
UI of Windows is buggy and inconsistent. Kernel and low level stuff are actually very stable and good.


>Kernel and low level stuff are actually very stable and good.

This. A while ago a build of Win 11 tailored for the Chinese government, called "Windows G", was shared/leaked; it had all the ads, games, telemetry, anti-malware and other bullshit removed, and it flew on 4GB RAM. So Microsoft CAN do it if they actually want to, they just don't want to for regular users.

You can get something similar yourself at home by running the debloat tools out there, but since they're not officially supported, either you'll break future Windows updates or future Windows updates will break your setup, so it's not worth it.


Something similar, or indeed, exactly the same:

https://www.windowscentral.com/software-apps/windows-11/leak...


This was talked about publicly back in the Vista days (I cannot find the articles now): Microsoft has commitments to their hardware partners to help keep the hardware market from collapsing.

So they are not incentivized to keep Win32_Lean_N_Mean, but instead to put artificial limits on how old the hardware running W11 can be.

I have no insider knowledge here; this is just a thing which gets talked about around major Windows releases historically.


If anything, Microsoft has a lot of problems because they support a wide variety of crappy hardware and allow just about anyone to write kernel-level SW (drivers). Not sure if this has changed, but those used to run in ring 0 even.

This was most evident back in the 90s when they shipped NT4: extremely stable, as opposed to Win95, which introduced the infamous BSOD. But Win95 supported everything, while NT4 had HW support on par with Linux (i.e. almost nothing from the cheap vendors).


NT4 started with a kernel-mode/user-mode security model, and drivers had to be written and validated accordingly.

9x, ME, and even the compatibility parts of XP (up to some service pack IIRC? Might have been SP2) would still allow DOS-style real-mode BS for any driver that wanted it.

I loathe all the dang software modems, too cheap to ship a decent device in a single unit, instead slicing off the user's already constrained resources.


Heh, who else remembers the golden benchmark: a US Robotics 56k HW modem (the only one I could find locally was an external one, too) to get online in either NT4 or Linux. When I finally did save up for one, I could fully leave Windows behind in 1998.


>Microsoft has commitments to their hardware partners to help keep the hardware market from collapsing.

Citation needed, since that makes no logical sense. You want to sell your SW product to the lowest common denominator to increase your sales, not to a market of HW that people don't yet have. Sounds like FUD.

>but instead to put artificial limits on how old the hardware running W11 can be

They're not artificial. POPCNT / SSE4.2 became a hard requirement starting with Windows 11 24H2 (2024) (that one only rules out quite old CPUs), and only Intel 8th gen and up have well-functioning support for Virtualization-Based Security (VBS), HVCI (Hypervisor-protected Code Integrity), and MBEC (Mode-Based Execution Control). That's besides TPM 2.0, which isn't actually a hard requirement or a feature everyone uses; the other ones are way more important.
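
If you want to check whether a given box clears the 24H2 instruction bar, here's a quick sketch using the GCC/Clang builtin __builtin_cpu_supports (x86 only; under MSVC you'd use the __cpuid intrinsic instead):

    #include <stdio.h>

    int main(void) {
        /* __builtin_cpu_supports probes CPUID once at startup and caches it */
        printf("POPCNT: %s\n", __builtin_cpu_supports("popcnt") ? "yes" : "no");
        printf("SSE4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
        return 0;
    }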

So at what point do we consider HW-based security a necessity instead of an artificial limit? With the ever-increasing vulnerabilities and attack vectors, you gotta rip the bandaid off at some point.


Windows 11 is running on my ThinkPad T530. Its CPU is very nearly 14 years old.

What is missing here that was present when this same computer was running Windows 10?


>Windows 11 is running on my ThinkPad T530. Its CPU is very nearly 14 years old.

Yes, you can bypass the HW checks to install it on a Pentium 4 if you want; nothing new here.

>What is missing here that was present when this same computer was running Windows 10?

All the security features I listed in the comment above.


So, if I'm hearing this right:

This computer had the security features that you listed while it was running Windows 10, and now that it is running Windows 11 it is lacking them?

(I'm not trying to be snarky. That's simply an astonishing concept to me.)


It didn’t. Windows 11 has them, due to support for new hardware mitigation features. What is it you don’t understand in particular?


There's a lot here that is hard to understand:

> > What is missing here that was present when this same computer was running Windows 10?

> All the security features I listed in the comment above.


> You want to sell your SW product to the lowest common denominator to increase your sales, not to a market of HW that people don't yet have.

A key difference between regular software and Windows is that almost nobody buys Windows; they get it pre-installed on a new PC. So a new PC purchase means a new Windows license.


You are just arguing the requirements are the requirements.

Are they as important as stated? Microsoft says so. Everyone here loves and trusts them, right?


Is this not just Windows LTSB/LTSC, which has been a thing forever?


Maybe. It could also be that for a 9-figure government contract they'll provide a custom LTSC branch just for you, with only the features you want.


I genuinely wonder if Windows G's start menu also uses React, and whether the start menu, right-click, or Windows Search still suck in Windows G or not :)


React Native, halfway between Web and native.


No, he's talking about ReactOS.


Microsoft should just open source Windows at this point.


Never heard of Windows G... that sounds like exactly what I want for my older ThinkPads :-)


I've been starting with Tiny11 and then running the debloat scripts against it. That reduces the memory footprint to about 2GB, and I've found zero compatibility problems doing this. You just have to use curl or something to download a browser, because you won't even have Edge.


> Windows G... sounds like exactly what I want for my older ThinkPads

I'm running 11 IoT Ent LTSC on a T420; it runs pretty okay.


> Kernel and low level stuff are actually very stable and good.

In their intended applications, which might or might not be the ones you need.

The slowness of the filesystem that necessitated a whole custom caching layer in Git for Windows, or the slowness of process creation that necessitated adding “picoprocesses” to the kernel so that WSL1 would perform acceptably (and still wasn’t enough for it to survive): those are entirely due to the kernel’s architecture.

It’s not necessarily a huge deal that NT makes a bad substrate for Unix, even if POSIX support has been in the product requirements since before Win32 was conceived. I agree with the MSR paper[1] on fork(), for instance. But for a Unix-head, the “good” in your statement comes with important caveats. The filesystem in particular is so slow that Windows users will unironically claim that Ripgrep is slow and build their own NTFS parsers to sell as the fix[2]. A small-file churn loop like the sketch below the links makes the gap easy to measure yourself.

[1] https://lwn.net/Articles/785430/

[2] https://nitter.net/CharlieMQV/status/1972647630653227054
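
To make that concrete, here's a minimal POSIX C sketch of the usual small-file churn microbenchmark (the file names and iteration count are mine; on Windows, build it under MSYS2/Cygwin or translate open/unlink to CreateFile/DeleteFile):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    int main(void) {
        enum { N = 10000 };
        char name[64];
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            /* create, write one byte, delete: metadata cost dominates,
               much like a git status or a build walking a source tree */
            snprintf(name, sizeof name, "bench%05d.tmp", i);
            int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            write(fd, "x", 1);
            close(fd);
            unlink(name);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%d create+write+delete in %.2fs (%.0f ops/s)\n", N, s, N / s);
        return 0;
    }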


This is on the mark.

But there's another issue, which is what cripples Windows for dev! NTFS has a terrible design flaw: small files, under 640 bytes, are stored in the MFT. The MFT ends up having serious lock contention, so lots of small-file changes are slow. This screws up anything Unixy, and git, horribly.

WSL1 was built on top of that problem, which was one of the many reasons it was slow as molasses.

That's also why ReFS and "dev drive" exist...


> NTFS has a terrible design flaw: small files, under 640 bytes, are stored in the MFT.

Ext4 also stores small (~150B) files inside the inode[1], and so do a number of other filesystems[2]? NTFS was unusually early to the party, but if you’re right that it’s problematic there then something else must also be wrong (perhaps with the locking?) to make it so.

[1] https://www.kernel.org/doc/html/latest/filesystems/ext4/inli...

[2] https://en.wikipedia.org/wiki/Comparison_of_file_systems#All..., the “Inline data” column.


This is not due to slowness of the file system. Native NTFS tools are much faster than Unix ones in some situations. The issue is that running Unix software on Windows will naturally have a performance impact. You see the same thing in reverse using Wine on Linux. Windows uses a different design for IO, and so requires software to be written with that design in mind.


> Native NTFS tools are much faster than Unix ones in some situations. The issue is that running Unix software on Windows will naturally have a performance impact. You see the same thing in reverse using Wine on Linux.

Not true. There are more and more cases where Windows software, written with Windows in mind and only tested on Windows, performs better atop Wine.

Sure, there are interface incompatibilities that naturally create performance penalties, but a lot of stuff maps 1:1, and Windows was historically designed to support multiple user-space ABIs; Win32 calls are broken down into native kernel calls by kernel32, advapi32, etc., for example, similar to how libc works on Unix-like operating systems.


It's pretty typical these days for software, particularly games of the DX9-11 eras, to perform better on Wine/Proton than they do under native Windows on the same hardware.


They rarely are IO constrained.

The file system isn't slow. The slowness will be present in any file system due to the file system filters that all file system calls pass through.


Right, by “file system” here I mean all of the layers between the application talking in terms of named files and whatever first starts talking in terms of block addresses.

Also, as far as my (very limited) understanding goes, there are more architectural performance problems than just filters (and, to me, filters don’t necessarily sound like performance bankruptcy, provided the filter in question isn’t mandatory, un-removable Microsoft Defender). I seem to remember that path parsing is accomplished in NT by each handler chopping off the initial portion that it understands and passing the remaining suffix to the next one as an uninterpreted string (cf. COM monikers), unlike Unix where the slash-separated list is baked into the architecture, and the former design makes it much harder to have (what Unix calls) a “dentry cache” that would allow the kernel to look up meanings of popular names without going through the filesystem(s).


NTFS will perform directory B+-tree lookups (this is where it walks the path) until it finds the requested file. The Cache Manager caches these B+-trees.

From there, it hits the MFT, finds the specific record for the file, loads the MFT record, and ultimately returns the FILE_OBJECT to the I/O Manager, and it bubbles up the chain back to (presumably) Win32. The MFT is just a linear array of records, which include files and directories (a directory record is just a record with directory = true, essentially).

Obviously simplified. Windows Internals will be your friend, if you want to know more.


Thanks for the explanation! Linux, meanwhile, will[1] in the normal case walk a sequence[2] of hash tables (representing incomplete but up-to-date views of directories) before hitting the filesystem’s vtable or the block I/O layer at all, taking no locks on the fast path[3] other than the RCU read lock. (A toy sketch of such a cache follows the footnotes.)

[1] https://www.kernel.org/doc/html/latest/filesystems/path-look...

[2] I was under the impression that it could look up an entire path at once when I wrote my grandparent comment; it seems I was wrong, which on reflection makes sense given you can move directories.

[3] https://www.kernel.org/doc/html/latest/filesystems/path-look...
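
If it helps to see the shape of it, a toy model of that kind of dentry cache in C (purely illustrative names; the real thing lives in fs/dcache.c and does this locklessly under RCU):

    #include <stdint.h>
    #include <string.h>

    #define BUCKETS 1024

    struct dentry {
        const struct dentry *parent;  /* hashed together with the name */
        const char *name;             /* one path component, not a full path */
        int inode;                    /* stand-in for the real inode pointer */
        struct dentry *next;          /* hash-chain link */
    };

    static struct dentry *table[BUCKETS];

    static unsigned hash(const struct dentry *parent, const char *name) {
        unsigned h = (unsigned)(uintptr_t)parent;
        while (*name) h = h * 31u + (unsigned char)*name++;
        return h % BUCKETS;
    }

    /* a hit resolves a component without calling into the filesystem;
       a miss falls back to the filesystem's own lookup, then gets cached */
    static struct dentry *lookup(const struct dentry *parent, const char *name) {
        for (struct dentry *d = table[hash(parent, name)]; d; d = d->next)
            if (d->parent == parent && strcmp(d->name, name) == 0)
                return d;
        return 0;
    }

    int main(void) {
        static struct dentry root = { 0, "/", 2, 0 };
        static struct dentry home = { &root, "home", 11, 0 };
        table[hash(&root, "home")] = &home;
        /* resolving "/home" costs one hash probe per path component */
        return lookup(&root, "home") ? 0 : 1;
    }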


Heh, first I've heard of Windows Internals. New friends for The Linux Programming Interface!


Yes, it won't be quite that in-depth given there's no source code, but you can easily look up the leaked NT4 source code on GitHub if you want to dive that deep. I would assume much of that code is still relevant today.

Also worth tracking down a copy of the NT OS/2 Design Workbook on the web (another leak).

And Inside the Windows NT File System by Helen Custer is a very short book but describes the very early state of NTFS capabilities/functions.


The Windows filesystem isn't slow per se; it's a "death by a thousand cuts" type of problem.

https://github.com/Microsoft/WSL/issues/873#issuecomment-425...


NTFS, not so great.


NTFS is just fine. Stable, reliable, fast, plenty of features for a general purpose file system.


Even with Defender etc. off, it is not fun. Lots of small-file IO brings it to its knees. Some want to blame the Windows I/O system, I don't know, but what I do know is that when people choose NTFS it is because they don't have an alternative. Nobody chooses it based on its quality attributes. I dare say there is no NTFS system that is faster than an EXT4 system.

If even MS internal teams rather want to avoid it, it seems like it isn't a great offering. https://news.ycombinator.com/item?id=41085376#41086062


NTFS on Linux should be near-par with ext4 on Linux.

Remember, I said the _file system_ was just fine. It's that extensible architecture above all file systems on NT that causes grief.

The only method to 'turn off' Defender is to use Dev Drive, which enforces ReFS, and even then you only get async Defender; it's not possible to disable it completely.


You can just turn off Defender using a group policy.

NTFS is infamous for being super slow. Even using EXT4 through WSL is faster.


...But no way can you wrap it into something that looks POSIX-y from the inside.


Why would you want to?


From the article, first use case:

> Example use cases include:

> * Running unmodified Linux programs on Windows

> * ...

That won't work if the unmodified Linux program assumes that mv replaces a file atomically; NTFS can't offer that.
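
For reference, the idiom such programs rely on; a minimal sketch (the helper name, paths and error handling are mine, and short writes are ignored for brevity):

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* write-temp-then-rename: readers see either the old file or the new
       one in full, never a torn or half-written file */
    int replace_atomically(const char *path, const char *data) {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp", path);
        int fd = open(tmp, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) return -1;
        if (write(fd, data, strlen(data)) < 0 || fsync(fd) < 0) {
            close(fd);
            unlink(tmp);
            return -1;
        }
        close(fd);
        /* POSIX guarantees rename() atomically replaces an existing target;
           the closest Win32 analogues are MoveFileEx with
           MOVEFILE_REPLACE_EXISTING, or ReplaceFile, each with different
           sharing/locking semantics */
        return rename(tmp, path);
    }

    int main(void) {
        return replace_atomically("config.txt", "key=value\n") ? 1 : 0;
    }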


NTFS uses atomic transactions; that's the only way it is able to recover after a fault.

You can read more if you wish in 'Inside the Windows NT File System' by Helen Custer, page 15.


How come your business or research is so tied to GCP? What about other providers?


This is an asinine question. Even if you build agnostic solutions (like a Docker image), you have storage resources, networks, configs, ACLs, snapshots and more all trapped inside GCP. We're human: we forget to back up things or push important commits. And we know cloud solutions quickly develop lock-in; even a simple cloud DB instance locks you into the vendor's config.

So there are at least a dozen perfectly good reasons this guy is panicking that his account was suddenly revoked without warning.


It's not asinine. This suspension happened two years ago. I don't see any sudden panicking.


Appeals take time. And it’s not an uncommon case. It doesn’t make his desire to recover the resources any less valid.


My latest Booking.com experience: booked a big apartment for 4 people. Arrived at the destination (Bristol, UK), and the apartment already had guests inside. Tried contacting the landlord, no reply. Called Booking.com; they offered accommodation 30km from the city centre, and it was already 11pm, no way to get there. Had to pay for our own hotels, and we never got back the money we paid to Booking. A neighbor of that apartment said they often double-book! Seems Booking.com doesn't care.


I recently had a hotel try to scam me through the official Booking.com messages. They knew my phone number (they also WhatsApp'd me), my booking dates and my email address (I got more attempts via email). I spoke at length with Booking.com and, despite using them virtually weekly for fifteen years, they did not give a crap and then stopped responding to my emails. All I wanted them to do was get the hotel to refund my booking, about two weeks before the date, as I no longer felt comfortable staying somewhere that would leak my details.

I will not use them again, just like I do not use Travelodge any more since they repeatedly double-booked my rooms. I feel like, eventually, I'm going to run out of brokers to use. Perhaps I just need to book direct with a handful of hotels.


This happens more often than you'd think, and especially late at night you really don't want shit like this, so indeed, I will not use Booking.com. When I had this happen with other services, they just fixed it, at their expense; Booking doesn't even pretend to care at all.


What did we do wrong, when a simple chat app requires hundreds of megabytes of memory, or even gigabytes! Sure, it's not just plain text but images and videos, but still...


Opened the samples: a bunch of JSON config files. Closed the samples. Do they really expect devs to write JSON to configure workflows and tasks!? Even Workflow Foundation had more C#, I think...


Aaaah, it says no angle sensor detected on my MB Air M1 :( This is it, I am buying the new model.


Can't remember how many times I added this to AUTOEXEC.BAT:

SET BLASTER=A220 I5 D1 T6
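REM A220 = port 220h, I5 = IRQ 5, D1 = DMA channel 1, T6 = card type (Sound Blaster 16)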

good times.


AI is great, and I can see it benefiting so many industries, except music. There's something profoundly wrong with AI-generated music.


Fewer LOC than a React/Redux app... Makes you think what we were doing the last 30 years :/


I’m pretty sure an app as mediocre as this would take up less code in React, or even plain JavaScript. The UI is a single table and a few inputs and buttons, and its main way of communicating with the outside world is message boxes - trivial to do in a web browser.


Netflix is "4K Ultra HD: Up to 7 GB per hour". Blu-ray is 25GB per layer, so max 50GB for 2 layers. Typical movies are 35-50GB. So BR, and I think even DVD, still look much better than any streaming service!


Sony’s streaming service is 80 Mbps or 36 GB per hour.
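
(For scale: 80 Mbps × 3600 s ≈ 36 GB per hour, while Netflix's 7 GB per hour works back to roughly 15-16 Mbps.)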

We’re going to have to disagree about DVDs though. They look awful on modern (big) televisions.


> Blu-ray is 25GB per layer, so max 50GB for 2 layers

Are pressed Blu-Rays limited compared to writeable ones?

I have 100GB BDXL blanks (single-sided) I use as one of the archives for my family photos/videos.

Couldn't a film BluRay also be 100GB on a single side?


Plenty of movies have been released on BD100.

Very out of date list: https://forum.blu-ray.com/showthread.php?t=294596

On a site that I am a member of there are nearly 1300 BD100 rips available.


Interesting. I was looking back at my Blu-ray collection (physical) the other day, looking for a UHD movie to test with; in my memory they were all UHD BDs, but to my surprise very few of them actually were, with most just being HD (1080p). I doubt there are any in my collection that are BD100; could I even play them? Currently using a PS5 as my BD player, and a PS4 and PS3 before that.


A PS5 can play UHD Blu-ray; PS3 and PS4 (even the Pro) can’t.

UHD discs are fairly noticeable at a distance as they usually use black disc cases instead of blue. They’re somewhat niche (if Blu-ray wasn’t already niche) and often sell at a premium, so I suspect unless you’ve been seeking them out you won’t have them barring the odd multi format bundle.


A lot more practical than having to deal with physical media. I'd even pay them for it, to have that kind of premium access.


100GB discs won’t work on standard Blu-ray players; the base standard predates BDXL discs. Ultra HD 4K players can play them.


And Netflix HD (1080) is hardly what one would expect. It may be technically 1080p but the bit rate is often quite low. Most people don't notice or care.

