
The article names a lot of other things that AI is being used for besides scamming the elderly, such as making us distrust everything we see online, generating sexually explicit pictures of women without their consent, stealing all kinds of copyrighted material, driving autonomous killer drones and more generally sucking the joy out of everything.

And I think I'm inclined to agree. There are a small number of things that have gotten better due to AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet by comparison feels like a clear net positive to me, even with all the bad it enables.


Here's the thing with AI: especially as it becomes more AGI-like, it will encompass all human behaviors. This will lead to the bad behaviors becoming especially noticeable, since bad actors quickly realized this is a force multiplication factor for them.

This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks and they may not be immediately evident. I know with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.


AI doesn't encompass any "human behaviours", the humans controlling it do. Grok doesn't generate nude pictures of women because it wants to, it does it because people tell it to and it has (or had) no instructions to the contrary

> bad actors quickly realized this is a force multiplication factor for them

You'd think we would have learned this lesson from failing to implement email charges that netted to $0 for balanced send/receive patterns, thereby ushering in a couple decades of spam, only eventually solved by centralization (Google).

Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.


You’re way off base. It can also create sexually explicit pictures of men.

Not sure if you're being sarcastic, but women are disproportionately affected by this compared to men.

So everything that was already done before generative AI.

It's true, making these things easier and faster and more accessible really doesn't matter

I actually love this about Linux. The syscall API is much better than libc (both the one defined by POSIX and libc as it actually exists on different Unixen). No errno (which requires weird and inefficient TLS bullshit), no hooks like atfork/atexit/etc., no locales, no silly non-reentrant functions using global variables, no dlopen/dlclose. Just the actual OS interface. Languages that aren't as braindead as C can have their own wrappers around this and skip all that nonsense.

Also, there are syscalls which are basically not possible to directly expose as C functions, because they mess with things that the C runtime considers invariant. An example would be `SYS_clone3`. This is an immensely useful syscall, and glibc uses it for spawning threads these days. But it cannot be called directly from C, you need platform-specific assembly code around it.


> But it cannot be called directly from C

No system call can, you need a wrapper like syscall() provided by glibc. glibc also provides a dedicated wrapper for the clone system call which properly sets up the return address for the child thread. No idea what you're angry about


Sure, you need a tiny bit of asm to do the actual syscall. That's not what I'm talking about. Most syscalls are easy to wrap, clone is slightly harder but doable (as evidenced by glibc). clone3 is for all intents and purposes impossible to write a general C wrapper for. It allows you to create situations such as threads that share virtual memory but not file descriptors, or vice-versa. That is, it can leave the caller in a situation that violates core assumptions by libc.

You're mixing things up. C the language doesn't know about virtual memory or file descriptors. Those are OS features.

The C library maintains its own set of file handles (the stdio streams), which are mapped to the OS file descriptors (the stdio streams and the OS file descriptors have different types and different behaviors).

I do not know whether this is true, but perhaps the previous poster means that using clone3 with certain arguments may break this file descriptor mapping, so invoking stdio functions after that may have unexpected results.

Also the state kept by the libc malloc may get confused after certain invocations of clone3, because it has memory pages that have been obtained through mmap or sbrk and which may sometimes be returned to the OS.

So libc certainly cares about the OS file descriptors and virtual memory mappings, because it maintains its own internal state, which has references to the corresponding OS state. I have not looked to see when an incorrect state can result after a clone3, but it is plausible that such cases may exist, so that glibc allows calling clone3 only with a restricted combination of arguments and it does not provide a wrapper that would allow other combinations of arguments.


Yes; this is why QEMU's user-space-emulation clone syscall handling restricts the caller to only those combinations of clone flags which match either "looks like fork()" or "looks like creating a new pthread", because QEMU itself is linked with the host libc and weird clone flag combinations will put the new process/thread into a state the libc isn't expecting.

All fair points. What do other languages' standard libraries do to work around clone3 then? If two threads share file descriptors but not virtual memory, do they perform some kind of IPC to lock them for synchronizing reads and writes?

> What do other languages' standard libraries do to work around clone3 then?

They don't offer generic clone3 wrappers either AFAIK. All the code I've seen that uses it - and a lot of it is not in standard libraries but in e.g. container runtime implementations - has its own special-purpose code around a specific way to call it.

My point is not that other standard libraries do it better, but that clone3 as a syscall interface is highly versatile, more so than it could be as a function in either C or most other languages. That is, the syscall API is the right layer for this feature to live at.


I don't think this is inflicting net suffering, really. The money doesn't just disappear, the seller gets it. Auctions are zero-sum.

They're not zero-sum on ebay because ebay takes a percentage cut

It's sad how much of this thread of supposed hackers comes from people who are simply parroting this dogma because it has been drilled into them. People were even preaching this before IPv6 privacy extensions came into use, either downplaying the privacy issues or outright telling people they were bad for wanting privacy because IPv6 is more important.

I understand the difference between NAT and firewall perfectly well. I have deployed and configured both for many years. The strawman of "NAT without firewall" is pretty much irrelevant, because that's not what people run IRL.

Firewalls are policy-based security, NAT is namespacing. In other fields, we consider namespacing an important security mechanism. If an attacker can't even name a resource they're not allowed to access, that's quite a strong security property. And of course, anyone can spoof IP and try to send traffic to 192.168.0.6 or whatever. But if you're anywhere in the world other than right inside my ISP's access network, you can't actually get the internet to route this to my local 192.168.0.6. On the other hand, an IPv6 firewall is one misconfigured rule away from giving anybody on the planet access.


Yeah, I think it is a bit more subtle of an issue than this flamewar always descends into.

There's people upthread arguing that every cellphone in the country is on IPv6 and nobody worries about it, but I'm certain there are thousands of people getting paid salaries to worry about that for you.

Meanwhile, the problem is about the level of trust in the consumer grade router sitting on my desk over there. With IPv4 NAT it is more likely that the router will break in such a way that I won't be able to access the internet. Having NAT break in such a way that it accidentally port forwards all incoming connection attempts to my laptop sitting behind it is not a likely bug or failure mode. If it does happen, it would likely only happen to a single machine sitting behind it.

OTOH, if my laptop and every other machine on my local subnet has a public IPv6 address on it, then I'm trusting that consumer grade router to never break in such a way that the firewall default allows all for some reason--opening up every single machine on my local subnet and every single listening port. A default deny flipping to a default allow is absolutely the kind of security bug that really happens and would keep me awake at night. And even if I don't go messing around with it and screw it up myself, there's always the possibility that a software bug in a firmware upgrade causes the problem.

I'd like to know what the solution to this is, other than blind trust in the router/firewall manufacturer or setting up your own external monitoring (and testing that monitoring periodically).

Instead of just screaming about how "NAT ISN'T SECURITY" over and over, I'd like someone to just explain how to mitigate the security concerns of firewall rulesets--when so very many of us have seen firewall rulesets be misconfigured by "professionals" at our $DAYJOBs. Just telling me that every IPv6 router should have default deny rules and nobody would be that incompetent to sell a router that wouldn't be that insecure doesn't give me warm fuzzies.

I don't necessarily trust NAT more, but a random port forward rule for all ports appearing against a given target host behind it is going to be a much more unusual kind of bug than just having a default firewall rule flipped to allow.


You could set up a monitoring solution that alerts you if one of your devices is suddenly reachable from the internet via IPv6. It will probably never fire an alert but in your case might help you sleep better. IPv6 privacy extensions could help you too.

In practice I don't think it's really an issue. The IPv6 firewall will probably not break in a way that makes your device reachable from the internet. Even if it would, someone would have to know the IPv6 address of the device they want to target - which means that you have to connect to a system that they have control of first, otherwise it's unlikely they'll ever get it. Lastly, you'd have to run some kind of software on that device that has a vulnerability which can be exploited via network. Combine all that and it gets so unlikely that you'll get hacked this way that it's not worth worrying about.


Thank you. This is the first time someone here has admitted that NAT actually adds some security. IPv4 will never go away, or at least not an important share of it, because of its simplicity and the NAT-level security it offers to the millions of professionals and amateurs who tinker with their routers.

NAT introduces complexity, not simplicity.

Besides, NAT isn't a security feature.


Secure and reliable IPv6 deployment has _more_ complexity than IPv4.

SLAAC is more complex than IPv4 w/ NAT w/ DHCPv4? Serious?

Assign a /56, firewall in place already dropping anything not explicitly allowed, done.
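For illustration, the "dropping anything not explicitly allowed" part really is this small in nftables (table and chain names are arbitrary; this assumes the box is the router doing the forwarding):

```nft
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        # allow replies to connections initiated from inside
        ct state established,related accept
    }
}
```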


> SLAAC is more complex than IPv4 w/ NAT w/ DHCPv4? Serious?

Yes? Has this ever been in question?

Stateful DHCP provides a _reliable_ way to configure clients, while SLAAC is anything but. It's also insufficient in itself if you want to configure things like NTP servers.

But that's not the main issue. The main issue is that with SLAAC you are supposed to hand out real routable addresses. That are _not_ controlled by you, so the end devices need to be able to handle prefix withdrawals and deprecations. This can lead to your printer not working if your ISP connection goes down and it has no more active IPv6 prefixes.

So you also need a stable ULA. But even that is not a guarantee because source IP selection rules are, shall we say, not the best.

But wait, there's more! You can trivially load-balance/failover NAT-ed IPv4 network over two WAN connections. Now try to do that with IPv6. Go on. I'll wait.


This is 100% correct, something the (dim) author of the article can't seem to understand.

Except in the real world everyone is also running UPnP, so NAT is also one misconfiguration away from exposing something publicly. In the real world your ISP might enable IPv6 one day and suddenly you do have a public address. Relying on NAT is a bad idea because it's less explicit. A firewall says you only want to allow these things through. Of course nothing is perfect and you can mess up, but NAT is just less clear: the expectation is not "nothing behind NAT should ever be exposed", it's "we don't have enough addresses and need to share".

UPnP is not tied to NAT; where did you get this from? UPnP is used to request direct connections, and a firewall can implement UPnP just as well as a NAT.

UPnP won't expose my SMB to the world on its own. For that you'd need an attacker already inside the NAT. So already on that side of the hatchway.

It's not "relying on NAT" to have it as a layer in the swiss cheese. Relying on any one thing is a bad strategy.

Sure, and that's fine, but relying on it isn't, and it isn't a reason not to use IPv6 (if you want namespacing, there are tools for that outside hiding behind a single IPv4). Hence the advice is not to rely on NAT.

This is people talking past each other, and to be fair, saying "everyone" in my post made it unclear, I was being glib in response to "because that's not what people run IRL", when evidently people do, I've seen it happen.


No, not everyone is running UPnP. Maybe on most home networks, but that’s not the audience that even knows or cares about NAT.

I think this is where the disconnect is: the home users are precisely the ones being talked about, because they are the ones most likely to be treating NAT like it is a security system for their devices in the real world.

I've literally seen someone's ISP turn on IPv6, and then have their long-running VNC service compromised because they were just relying on NAT to hide their services.


> Except in the real world everyone

...and goes on to ignore enterprise businesses, which consume most of the v4 space and are among the biggest resisters of v6.


>Except in the real world everyone is also running UPnP

Definitely not. I've been disabling that for years.


UPnP on CGNAT machines? Lol.

> It's sad how much of this thread of supposed hackers comes from people who are simply parroting this dogma because it has been drilled into them.

It's only been drilled into people because it's true:

* https://blog.ipspace.net/2011/12/is-nat-security-feature/


> If an attacker can't even name a resource they're not allowed to access, that's quite a strong security property.

This is entirely incorrect. An attacker can still name a resource; they only have to guess the right port number that is mapped to that resource.

That's how NAT fundamentally works after all, it allows you to use the additional 16-bits of the port number to extend the IP address space. Any blocking of incoming traffic on a port already mapped to a local address is a firewall rule.

The reason that it offers protection is because attackers aren't going to try every single port. Compared to that IPv6 will offer more protection as an attacker would have to guess the right address in a 64-bit namespace rather than just a 16-bit one.


That's absolutely not true, because forwarding rules don't exist by default. You can try all ports and will get no answer.

But the part about the undersea cable is simply wrong! Major undersea cables have been disrupted several times and never has a "continent gone dark".

I think this betrays a severe misunderstanding of what the internet is. It is the most resilient computer network by a long shot, far more so than any of these toy meshes. For starters, none of them even manage to make any intercontinental connections except when themselves using the internet as their substrate.

Now of course, if you put all your stuff in a single organization's "cloud", you don't get to benefit from all that resilience. That sort of fragile architecture is rightly criticized but this falls flat as a criticism of the internet itself.


Car enthusiasts caring about the driving experience doesn't just mean drivability. Engine sound is a huge part of it. All the classic Porsche 911 have flat-6 engines which make a distinctive sound that is totally part of the brand.

FTR I don't care about this myself, I'm happy with my EV. But the importance of this aspect is easily missed by people not part of the target demographic.


It feels like engine sound has become more important to these people since EVs entered the market. I'm sure it was there before, but not to the same extent.

The huge uproar about the 718 having a flat four turbo engine was mostly about the sound. (I don’t have a problem with it.) I think it has always been there.

It became more of a selling point as regulation came for it: OPF, stricter modification control, etc. Prior to that it didn't matter as much, since the sound was always decent and you could do whatever you wanted to it. Now, a pops-and-bangs tune with a straight pipe will get your car impounded in most countries the first time a cop sees/hears you.

USB-C I guess?


Weeeeelllll that was mainstream a long long time before they adopted it. And I'm still annoyed that the only devices with Lightning in our house are my AirPods and iPhone 12 mini and my wife's iPhone 14 Pro.

Always need to attach an adapter to my Anker chargers and powerbanks.


I think the person you’re replying to meant MacBooks. They were USB-C exclusively way before Windows machines.


It's funny, I was mad at them for getting rid of MagSafe for years, and super excited when they brought it back with the AS Macs. Used the cable for a year or two and then decided to simplify my life by just using USB-C for everything.

I hope they can forgive me for doubting their benevolent wisdom, I promise never to do it again.


Same… I love MagSafe and would prefer to use it. I’m always worried about yanking the computer with the USB-C charger in and breaking the cable or the port.

But I have a bunch of USB-C stuff and so when I go to charge my laptop it’s just easier to find that cable and use it.


The battery life is sufficient that I never feel the need to leave it umbilical-ed to an outlet across the room. I'll leave it docked at my desk, or use it wirelessly, or charge it at a conference room table, or recharge it after the day is done in my hotel room as I sleep.

That's the real difference - it now easily lasts until I would want to take an extended break anyway.


There are many magnetic USB C plugs. I am not sure if they are standard compliant but they work fine.


I might as well just use the official magsafe power cable that came with my macbook if I were to do that. The point was more convenience. I have a USB-C charger at my desk, at my bed, at the couch, etc. Anywhere I am I can just plug in without fiddling with other cables (or connectors). Ultimately I'm lazy and just want to simplify my cable management :)


First use of a MacBook Pro and in a sleep-addled state I plugged the MagSafe cable into the Mac USB-C end first.

It’s very confusing if you do that and are an idiot.


There is not a single port on the Apple Silicon MBP that I wouldn't trade for another thunderbolt (USB-C) port.

Closest would be the SD card slot... if it was SD Express.

If they had released the M1 MBP in the old chassis I would have a real challenge upgrading to the current models.


MagSafe in the age of good battery life


Ah ok, yeah sure, that was nice (could have added an A and HDMI port in this case, but ok, they were early with that.)


Please remove the AI parts.


> old code gets automatically compiled by the old version of the compile

That's not what happens. You always use the same version of the compiler. It's just that the newer compiler version also knows several older dialects (known as editions) of the language.


And it remains to be seen how well this approach will work as time passes and the number of versions continues to increase.


That won't be any more of a problem than the support that every existing C or C++ compiler has for targeting different versions of the standard.


Right, it's not considered weird for a C++ compiler to offer C++ 98, C++ 11, C++ 14, C++ 17, C++ 20, C++ 23 and C++ 26 (seven versions) and support its own extra dialects.

It is also usual for the C++ compilers to support all seven standard library versions too. Rust doesn't have this problem, the editions can define their own stdlib "prelude" (the reason why you can just say Vec or println! in Rust rather than their full names, a prelude is a set of use statements offered by default) but they all share the same standard library.

core::mem::uninitialized() is a bad idea and we've known that for many years, but that doesn't mean you can't use it in brand new 2024 Edition Rust code, it just means doing so is still a bad idea. In contrast C++ removes things entirely from its standard library sometimes because they're now frowned on.


Well, there's the 2015, 2018, 2021, and 2024 editions. It's been a decade and it seems to be working pretty well?
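For context, and assuming nothing beyond what Cargo documents: the edition is just a per-crate setting in `Cargo.toml`, so a single compiler builds crates of different editions side by side in one dependency graph:

```toml
[package]
name = "example"     # illustrative crate name
version = "0.1.0"
edition = "2021"     # this crate's dialect; each dependency declares its own
```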


I don't know how everyone arrives at that conclusion when the cost of the subscription services is also going up (as evidenced by the very article we're talking about). People who are renting are feeling this immediately, whereas people who bought their computers can wait the price hikes out for a couple years before they really need an upgrade.


Subscriptions have a "boiling frog" phenomenon where a marginal price increase isn't noticeable to most people. Our payment rails are so effective that many people don't even read their credit card statements; they just have vampires draining their accounts monthly.

Starting with a low subscription price also has the effect of atrophying people's ability to self-serve. The alternative to a subscription is usually capital-intensive - if you want to cancel Netflix you need to have a DVD collection. If you want to cancel your thin client you have to build a PC. Most modern consumers live on a knife edge where $20/month isn't perceptible but $1000 is a major expense.

The classic VC-backed model is to subsidize the subscription until people become complacent, and then increase the price once they're dependent. People who self-host are nutjobs because the cloud alternative is "cheaper and better" until it stops being cheaper.


My bank has an option to send me a notification every time I'm charged for something. I've noticed several bills that were higher than they should have been "due to a technical error". I'm certain some companies rely on people not checking and randomly add "errors".

Notably there's no way (known to me) that you can have direct debits sent as requests that aren't automatically paid. I think that would put consumers on an equal footing with businesses though, which is obviously bad for the economy.


Wasn’t aware about charge notifications. Looks like my bank supports that - thanks for the info!


> My bank has an option to send me a notification every time I'm charged for something.

Wait, your bank doesn't do that by default? I've always assumed it's default behavior of most banks.


It's normally an option in my experience. I have mine set for charges over $100. I don't want a notification every time I buy gas (I do check my statements every month though).


What is the harm in being notified when you buy gas? It doesn’t hurt anything, and I DO want to be notified if someone else buys gas on my card!

The discussion started as a way to avoid forgetting to cancel subscriptions or to catch subscription price increases; if you are setting your limit to $100, you aren’t going to be seeing charges for almost all your subscriptions.

I have my minimum set to $0, so I see all the charges. Helpful reminder when I see an $8 charge for something I forgot to cancel.


Alert fatigue. Most people, if they get an alert for every single purchase they make, will learn to ignore the alerts as they are useless 99% of the time. Then when an alert comes through that would be useful, they won't see that either.

Anyone who has had the misfortune to work on monitoring systems knows the very fine line you have to walk when choosing what alerts to send. Too few, or too many, and the system becomes useless.


As I said, I have my alert set to $0 and it really hasn’t caused fatigue. For one thing, when it is something I just purchased, the alert is basically just a confirmation that the purchase went through. I close it immediately and move on.

If I get an alert and I didn’t buy anything, it makes me think about it. Often times it just reminds me of a subscription I have, and I take the moment to think whether I still need it or not. If I start feeling like I am getting a lot of that kind of alert, I need to reevaluate the number of subscriptions I have.

If I get an alert and I don’t immediately recognize the source (the alert will say the amount and who it is charged to), it certainly makes me pause and try to figure out what it is, and that has not been “alert fatigued” away from me even after 10+ years of these alerts.

Basically, if I get an alert when I didn’t literally JUST make a purchase, it is worth looking into.

I don't think it causes alert fatigue; I am not getting a bunch of false alerts throughout my day, because I shouldn't be having random charges appear if I am not actively buying something.


You had to opt in for my bank, and choose a minimum amount to be notified for. I chose $0, so I get notified for everything.


If it's random surely you'd get a discount sometimes!


> The alternative to a subscription is usually capital-intensive - if you want to cancel Netflix you need to have a DVD collection.

I did Apple Music and Amazon Music. The experience of losing “my” streaming library twice totally turned me off these kinds of services. Instead I do Pandora, and just buy music when I (rarely) find something I totally love and want to listen to on repeat. The inability to build a library in the streaming service that I incorrectly think of as “mine” is a big feature, keeps my mental model aligned with reality.


I do wish these services would have an easier method to import/export playlists and collections. But that would make it easier to leave, so it's not going to happen.


I don’t know about Amazon but having migrated to/from both Spotify and Apple Music they are both ludicrously easy to export playlists/libraries from.


At least with Apple Music you can cmd-a cmd-c cmd-v playlists into a CSV file or something.


Yet you can't search anything unless you have a 100% exact match


> if you want to cancel Netflix you need to have a DVD collection

You don't need a whole DVD collection to cancel Netflix, even ignoring piracy. Go to a cheaper streaming service, pick a free/ad supported one, go grab media from the library, etc. Grab a Blu-Ray from the discount bin at the store once in a while, and your collection will grow.


Music is different, but I never understood buying movies. Once I see a movie, I've seen it. I very rarely watch a movie more than once.


It really depends on the movies you're watching and how you watch them. I've watched "It Follows" like 4 times in the past year to show it to different people. I would watch The Shining every year at Halloween, and It's a Wonderful Life at Christmas. On the other hand, sometimes you just want to throw on one of your comforting favorite movies in the background.

There's also a media preservation angle - you can imagine the monopoly media companies of the next decade not wanting to stream "My Own Private Idaho" or "Female Trouble".


Maybe you would, if you bought it.


I've bought a ton of movies in the past. The vast majority I've sold second hand or thrown away because I just didn't care to watch again and I didn't feel like storing something I'd never use forever.

Same goes for a lot of other media. Some amount of it I'll want to keep but most is practically disposable to me. Even most videogames.


No, I do own some (actually it was more in the VHS days so tapes) and I just found that I never really watched them again. So I stopped buying movies. I'm the same with books. Once I read it, I've read it. I would rarely read a novel twice. I know what's going to happen, so what's the point? Reference books are different of course.


Some of us just consume media differently, I suppose. I'm a big fan of going back to re-read/re-watch a lot of my favorite media. Sometimes it's because a new volume/season/movie came out years later, so I'll take time to re-experience the original media to get ready for it. Never really had an issue experiencing something again and having it feel fresh because it's been a few years.

I will admit that re-reading books has become less of a habit the older I get because it is time consuming to get through a longer series again.


I'm mostly the same, I don't watch movies twice. But there are exceptions. Some movies are just beautiful or I like how they make me feel, so I want to rewatch them. Groundhog Day is an example.


You're not really thinking this through enough. The exact same logic you used can be applied to music: once you've listened to the album once, you know how it will go, so what's the point of listening again? Presumably you do get something out of listening to music again (since you said you do listen to it more than once), so whatever that "something" is... you can infer that others get similar value out of rereading books/rewatching movies, even if you personally don't.

For myself, the answer is "because the story is still enjoyable even if I know how it will end". And often enough, on a second reading/viewing I will discover nuances to the work I didn't the first time. Some works are so well made that even having enjoyed it 10+ times, I can discover something new about it! So yes, the pleasure of experiencing the story the first time can only be had once. But that is by no means the only pleasure to be had.


> The exact same logic you used can be applied to music: once you've listened to the album once, you know how it will go, so what's the point of listening again?

Most music doesn't have the same kind of narrative and strong plot that stories like novels and movies do; this is a massive difference. And even if it does, it usually doesn't take a half hour or more to get through it. That's a pretty big difference between the types of art.


This is something I’ve been seeing for a while. As a teen who kept his $300 paycheck in cash, that money would last a very long time. Now I make a good six figures and was seeing my accounts spending way more than I should. It wasn’t big purchases; it was $50 here, $200 there. A subscription here and there. By the end of the month I would rack up $8k in spending.

Going line by line, I learned how much I had neglected these transactions as the source of my problem. Could I afford it? Yes. But saving and investing are a better vehicle for early retirement than these minor dopamine hits.


You can only boil the frog until it dies. If there isn't a true dependency relationship then at some point the industry will die.

In the 2010's, when short on money, I noticed my cable+Internet package was above $200. I took a look at things and cut the TV service, keeping the Internet.

Movies and theaters thought they were untouchable until they weren't. Games can keep increasing their subscription fees until people just stop playing them. There was a world before video games, after all.


Sure, but modern cloud subscriptions bundle in a lot of service layers you otherwise wouldn't pay for, so effectively you may be buying the hardware yearly. That's a lot different than renting a media collection, assembled over a lifetime, for the price of one new item a month.


> Subscriptions have a "boiling frog" phenomenon where a marginal price increase isn't noticable to most people.

This is so apt and well stated. It echoes my sentiment, but I hadn't thought to use the boiling frog metaphor. My own organs are definitely feeling a bit toastier lately.


The best con of all is convincing people they need Hollywood in their lives to be happy.


Difference is that if a subscription goes up from $10 to $15, that doesn't seem too bad.

But if you want to purchase a new computer, and the price goes from $1000 to $1500, then that's a pretty big deal. (Though in reality, the price of said computer would probably go up even more, minimum double. RAM prices are already up 6-8 fold from summer)


The price of building a PC is not double, lol, and RAM is nowhere near up 6-8x.

https://www.bestbuy.com/product/crucial-pro-overclocking-32g...

That 32GB for $274 was not $34-$45 in the summer. RAM is up like 3x, but RAM is one of the cheaper parts of the PC.

RAM that was $100 in summer is like $300 now when I look. So that's an extra $200 maybe $300, on say a $1500 build.

GPUs are not up, they are still at MSRP:

https://www.bestbuy.com/product/asus-prime-nvidia-geforce-rt...

SSDs are up marginally, maybe $50 more lets say for a 2TB.

So from summer you are looking at like a $250-350 increase on say a $1500 PC


Where I live, a pair of Kingston FURY Beast Black RGB DDR5 6000MHz 32GB (2x16GB) has literally gone up from what is equivalent to $125 this summer, to currently selling for what is equivalent to $850.

Obviously this depends on where you live.


I think looking at the same exact product from the same retailer is not really the full story. Personally I would accept looking at the same exact spec RAM across retailers in your region. Maybe it's still a lot more for you, but in the US it's not as bad as I see people say.

Realistically people normally buy whatever ram is the cheapest for the specs they want at the time of purchase, so that's the realistic cost increase IMO.


Here is some proper data:

https://pcpartpicker.com/trends/price/memory/

The same site also has price trends for CPUs, video cards, etc.


It's OK data, but the average can be skewed high by some vastly inflated options that nobody cares about.

Most people will pick a ram spec and buy whatever is the cheapest kit for that spec at the time.

I think the best data view would be what is the cheapest available kit for each spec over time rather than the average price of each kit.


Wouldn't historical data also be inflated by the gold plated Monster branded RAM sticks too though? Making the now to then comparison, well, comparable.


Not sure I agree with this.

Something that skewed the market in the past, which is unrelated to the reasons the market is skewed now, doesn't make a compelling comparison.

The reasons would lead to differing levels of market skew.


Corsair Vengeance 128 GB (2 x 64 GB) DDR5-6400. $339 in Sept 2025. $1599 in Jan 2026. 4.7x increase. https://pcpartpicker.com/product/LPvscf/corsair-vengeance-12...

I cancelled my plans to upgrade my workstation, as the price of 256 GB of RAM became ridiculous.


The MSRP for those GPUs is already inflated. There's a reason Nvidia is going to start making more RTX 3060 GPUs: because people (and system builders) can't afford 40XX and 50XX GPUs.


I paid $150 for a 64GB DDR5 kit in Jan 2025. The same kit is $830 today, a 5.5x increase.


What are the specs of the kit?


Difference is that subscriptions need to support IT staff, data centers, and profit margins. A computer under your desk at home has none of those support costs, and it gets price competition from used parts, which subscriptions don't.

Cloud (storage, compute, whatever) has so far consistently been more expensive than local compute over even short timeframes (storage especially, I can buy a portable 2TB drive for the equivalent of one year of the entry level 2TB dropbox plan). These shortage spikes don't seem likely to change that? Especially since the ones feeling the most pressure to pay these inflated prices are the cloud providers that are causing the demand spike in the first place. Just like with previous demand spikes, as a consumer you have alternatives such as used or waiting it out. And in the meantime you can laugh at all your geforce now buddies who just got slapped with usage restrictions and overage fees.


Subscription is still worth it for most people though. Sure it costs more, but your 2TB plan isn't a single harddrive, it is likely across several harddrives with RAID ensuring that when (not if!) they fail no data is lost, plus remote backups. When something breaks the subscription fixes that for no extra charge.

If you know how to admin a computer and have time for it, then doing it yourself is cheaper. However make sure you are comparing the real costs - not just the 2TB, but the backup system (that is tested to work), and all your time.

That said, subscriptions have all too often failed reasonable privacy standards. This is an important part of the cost that is rarely accounted for.


I’m not even sure it does cost more. I could have a geforcenow subscription for like 8 years before it’s more expensive than building a similar spec gaming rig.


Depends on the service, and timeframes. For geforcenow, you also need to consider the upgrade cycle - how often would you need to upgrade to play a newer game? I'm not sure but probably at least once within that 8 years. Buying a new car, or almost new car, and driving it until it falls apart is a better financial option than leasing. But if you want a new car every year or two, leasing is more affordable - for that scenario. Also it depends on usage. My brother in law probably plays a video game once every other month. At that point, on demand pricing (or borrowing for me) is much better than purchase or consistent subscription. You need to run the numbers.
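Running the numbers for a case like this might look like the sketch below. The $20/month tier and $1900 rig price are illustrative assumptions, not quoted prices, and it ignores upgrade cycles and usage caps discussed elsewhere in the thread:

```python
# Rough rent-vs-buy break-even sketch. All prices are assumptions for
# illustration, not actual GeForce Now or retail pricing.

def breakeven_months(upfront_cost: float, monthly_fee: float) -> float:
    """Months of subscribing after which buying outright would have been cheaper."""
    return upfront_cost / monthly_fee

# Assumed: $1900 gaming rig vs. a $20/month cloud gaming subscription.
months = breakeven_months(1900, 20)
print(f"{months:.0f} months, about {months / 12:.1f} years")
```

With those assumed numbers the break-even lands near 8 years, which matches the earlier estimate, but a pricier subscription tier or a mid-cycle hardware upgrade shifts it substantially.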


Depends on how much you play. geforcenow is limited to 100 hours a month, with additional hours sold at a 200% premium. This dramatically changes the economics ( https://www.techpowerup.com/344359/nvidia-puts-100-hour-mont... has a handy chart for this )


Yikes. If you're playing more than 100 hours a month then it's time to touch grass.


I'm not sure what the value of shaming people's hobbies is. 3 hours a day is easy if it's your primary hobby, and likely double/triple that on weekends.


The business model of Facebook, YouTube, traditional TV, app stores, and yes video games depend on people not touching grass.


If you live alone by yourself such that the concept of a shared gaming system is entirely unimaginable, maybe you're the one that needs to touch grass


> Sure it costs more, but your 2TB plan isn't a single harddrive, it is likely across several harddrives with RAID ensuring that when (not if!) they fail no data is lost, plus remote backups. When something breaks the subscription fixes that for no extra charge.

Well yes, of course. And for cloud compute you get that same uptime expectation. Which if you need it is wonderful (and for something like data arguably critical for almost everyone). But if we're just talking something like a video game console? Ehhh, not so much. So no, you don't include the backup system cost just because cloud has it. You only include that cost if you want it.


> ...that doesn't seem too bad.

Yep, "seem". But the reality is more like 3 different subscriptions going up by $5/month, and the new computer is a once-in-4-years purchase:

$5/month * 3 subscriptions * 48 months = $720.00

And no bets on those subscriptions being up to $20 or so by the end of year 4.
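The arithmetic above can be sketched out; the $5 hike, 3 subscriptions, and 48-month window are the numbers from this comment, while the one-time $500 computer price jump it compares against is from the earlier $1000-to-$1500 example:

```python
# Sketch: cumulative cost of small per-subscription price hikes vs. a
# one-time hardware price increase, using the thread's numbers.

def subscription_hike_total(hike_per_month: int, num_subscriptions: int, months: int) -> int:
    """Total extra paid over `months` due to a monthly price hike on each subscription."""
    return hike_per_month * num_subscriptions * months

extra = subscription_hike_total(5, 3, 48)
print(extra)          # 720
print(extra > 500)    # True: exceeds the one-time $500 computer price jump
```

The point being that small recurring increases quietly outgrow a visible one-time jump over the same 4-year replacement cycle.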


But if you finance the computer (not hard to get 0% financing on consumer electronics), the price goes from $41 a month to $62 a month. It’s the same difference.


For whatever reason, people are just more into paying $10 subscription fees than they are into financing stuff.

To such a degree that they end up paying for a bunch of subscriptions they completely forget about.


The mental model of subscriptions and financing are totally different. If I'm paying a subscription I might cancel next month, and that's a sort of freedom. If I'm financing a piece of hardware I don't want to stop paying, I want that hardware, so that's a commitment.


The difference is that with financing you're stuck with it (and your credit rating drops, at least in the EU here). You're not stuck with a subscription. If your income changes and you can't afford it anymore then you can cancel your subscription.


In the US if you don't have any debt, that is bad for your credit rating. Perversely, the more debt you have, the easier it is to get more credit, at least up to a point.


Oh sure, my original comment’s point was just to allude to the point that costs are going up for all methods of compute, so that fact alone shouldn’t influence your buy versus rent versus finance decision too much.

This idea that there’s a conspiracy to take personal computing away from the masses seems far fetched to me.


> people are just more into paying for $10 / subscription fees, than they into financing stuff

I'm not so sure, seeing the explosion of Buy Now Pay Later (BNPL) platforms.


The common thread is that people are bad at saving and delayed gratification. The easiest path to instant gratification wins more often than not.


For some reason I think people are less likely to finance a computer, but maybe not, given that phone financing is a thing.


Just go to the Apple website and notice how every computer price has a monthly payment option in the same font size right under the purchase price.



> wait the price hikes out for a couple years

Or much longer. The computers I use most on a daily basis are over 10 years old, and still perfectly adequate for what I do. Put a non-bloated OS on them and many older computers are more than powerful enough.


And Linux being good for gaming now (thanks Valve!) makes this option more attractive than ever.

And the inability to run rootkits is a bonus, not a drawback.


Netflix says hello lol

Ever-increasing prices

Password sharing forbidden

Etc etc

And still making more and more money.

People are willing to take a beating and pay a lot more if they are entertained.


There must be a breaking point. We reached ours last year, when the price went up again and my grandfathered plan wasn’t accepted anymore. So I talked to the missus and we cancelled our Netflix.

It had been my account for, what, a decade? A decade of not owning anything because it was affordable and convenient. Then shows started disappearing, prices went up, we could no longer use the account at her place (when we lived separately), etc. And, sadly, I’m done with them.

I think most people will eventually reach a breaking point. My sister also cancelled, which I always assumed would never happen.


> I don't know how everyone arrives at that conclusion when the cost of the subscription services is also going up

Of course they will go up, that's the whole idea. The big providers stock up on hardware, front-run the hardware market, and starve it of products while causing prices to rise sharply. At that point their services are cheaper because they are selling you the output of hardware they bought at low prices: bought in bulk, under cheap long-term contracts and, in many cases, kept dark for some time.

Result: at the time of high retail hardware prices, cloud prices are lower; they increase later to make more profit, and the game can continue with the cloud providers always one step ahead of retail in a game of hoarding and scalping.

Most recently, scalping was big during the GPU shortages caused by crypto-mining. Scalpers would buy GPUs in bulk then sell them back to the starved market for a hefty margin.

Cloud providers buying up hardware at scale is basically the same, the only difference is they sell you back the services provided by the hardware, not the actual gear.


People rent things they can’t afford to own.


That's one common reason for renting, not the only one.

I've rented trailers and various tools before too, not because I couldn't afford to buy them, but because I knew I wouldn't need them after the fact and wouldn't know what to do with them after.


You could never afford a hyperscaler-size data center. You could always just buy a server and stick it in colo. It’s just not totally the same thing.


I can't afford the ram and storage in the server anymore though. So it kinda is the same thing.


Rent vs own isn’t the issue then. It’s the cost of the underlying asset.


I also couldn't afford to rent one.

I can afford to rent fractional use of one, but by that token I could also afford to buy a very small fraction of one too.


Is this true? I'm trying to think of a solid example and I'm drawing blanks.

Apartments aren't really comparable to houses. They're relatively small units which are part of a larger building. The better comparison would be to condominiums, but good luck even finding a reasonably priced condo in most parts of the US. I'd guess supply is low because there's a housing shortage and it's more profitable to rent out a unit as an apartment than to sell it as a condo.

It seems to me that most people rent because 1) they only need the thing temporarily or 2) there are no reasonable alternatives for sale.


Check out the furniture rental industry. Car rental (lease) industry. Electronics rental industry.


Lots of people rent houses, it’s not just apartments.


And even ~40% of so-called homeowners rent money in order to be there.


or just not ready for commitment (try first, buy later, etc...)


Exactly, if you can’t afford the high upfront cost that you can stretch out over a longer period of time, you’re stuck paying more over the long term as the subscriptions get more expensive.


Because the World Economic Forum, where our political and corporate leaders meet and groom each other, point-blank advertised "you will own nothing and be happy."


We’ve been raised to believe “experiences make you happy, not things.”

Everything as a service is the modern marketing ideal.

