
I really wish more people wanted screens that looked as good as their cellphone.

Bright, sharp text, great color. We've had the great Apple Studio Display for years now; it's about time others came along to fix some of its shortcomings, like the 27" size, 60Hz refresh rate, and lack of HDMI ports for use with other systems.

So many of us have to stare at a screen for hours every day and having one that reduces strain on my eyes is well worth $1-3k if they'd just make them.


The company I work at gives all new developers a pair of 1080p displays that could have come right out of 2010.

It amazes me, and it’s so sad. They have no idea what they’re missing. I’m sure high PPI would pay off fast in eye strain. And it’s not like monitors need replacement yearly. Tons of time to recoup that small cost.

I’m not arguing for $2k 37” monitors, just better than $200 ones.


Even $200 will already buy a 4K 27" (LG), which isn't even bad. I swear by HiDPI as well, but my work is the same: 1080p displays, and really bad contrast screens too. Definitely not TN (they're not that bad), and not VA (VA tends to have way better contrast than IPS). Probably just bottom-barrel IPS.

Just about every company does something like this.

At one point in my career, I just started buying my own monitors and bringing them into work.

I remember when ~19" or 20" was the norm, and I bought a Dell 30" 2560x1600 monitor. Best $1400 I ever spent; used it for years and years.

(I still have it although I retired it a few years back because it uses something called dual-link DVI which is not easily supported anymore)

I think if you are an engineer, you should dive headlong into what you are. Be proactive and get the tools you need. Don't wait for some management signoff that never comes while you suffer daily, and are worse at your job.


I work for a white-shoe law firm in Boston. I and most of my peers have total compensation approaching $500k.

And we have 1280x1024 monitors from the 00s, and we're not allowed to have anything better, even out of our own pockets, because "that's what we use here".


If you reach the point where you want to replace your in house IT with a company that will give you good tech and good tech support, let me know. I know a few people.

Out of all the places I would think would want fancy monitors to show off how fancy they are…

Too many creative vibes - fancy monitors don't necessarily communicate seriousness.

So you work at the same company I do? They just “upgraded” us to curved Dell UltraWides with a PPI of 110.

Unsurprisingly this is not a motivating factor to come back to the office, given I have a 220 PPI 6K at home.


> I’m sure high PPI would pay off fast in eye strain.

But we have gray-on-gray to compensate. One even has a choice: do you want light eye strain or dark eye strain?


Penny-pinching on monitors for the devs who cost a fortune...

:(


The old monitors still work, it's a waste to throw them away - something like that?

I’ve never heard a specific reason, but I fully suspect “it’s fine, why do you need that?” to be the answer.

Maybe people who have never used HiDPI. Maybe they've seen it and don't get the hype. Maybe they're just penny-pinching. IDK.


I don't think people care all that much about phones. It's just that phones are power-constrained, so manufacturers wanted to move to OLEDs to save on backlight; and because the displays are small, the tech was easier to roll out there than on 6k 32-inch monitors.

But premium displays exist. IPS displays on higher-end laptops, such as ThinkPads, are great - we're talking stuff like 14" 3840x2160, 100% Adobe RGB. The main problem is just that people want to buy truly gigantic panels on the cheap, and there are trade-offs that come with that. But do you really need 2x32" to code?


The other thing about phones is that you have your old phone with you when you buy a new one, so without even really meaning to you're probably doing a side by side direct comparison and improvements to display technology are a much bigger sales motivator.

This is the insight that sold a billion iPhones. They were obsessed with what happens when you’re at the store, and you don’t need a new phone, and you pick one up, and…

Most people, including people who work professionally with computers, spend more time per day looking at their phones than they do at their screens.

I expect people are VERY sensitive to mobile phone screen quality, to the point that it's a big factor in phone choice.


Outside Thinkpads IPS is basically the cheap/default option on laptops, with OLED being the premium choice. With Thinkpads TN without sRGB coverage is the cheap/default option, with IPS being the premium choice.

I'm getting my new ThinkPad, an X1 Carbon with an OLED screen, tomorrow. I haven't seen a TN panel in ThinkPads for years.

But yes, you're right, they are conservative about new tech in the ThinkPad line.


Which ThinkPad has a 14" 4K 100% Adobe RGB compliant IPS display?

As far as I can see, 4k Thinkpad IPS are DCI-P3. There are Yogas with 3.2k Adobe RGB tandem OLEDs.

A fast color e-ink would be possible, but development would be very expensive for an unknown market. It would be a perfect anti-eye-strain second monitor though.

Dasung looks to be getting there!

I have 27" 5K monitors at home since I WFH. One reason I don't really want to RTO is because these monitors aren't standard yet even if they have been out for more than a decade now (and my FAANG employer won't spring for the good stuff). That and my mechanical keyboard would never work in an open office :P.

Is there a shortlist of top-of-the-line utilitarian monitors that you can just buy, without researching or being some niche gamer?* Something similar to LG G-series TVs. Apple Studio and Dell UltraSharp seem to be on that list. Any others?

*Struggling for words, but I'm looking more for the expedient solution rather than the "craft beer" or "audiophile" solution.


Keep in mind that normal OLEDs are quite bad for typical development tasks: lots of text with high contrast. Here is an example that would be unbearable for me: [1]. For text, IPS rules so far. For video and games, definitely OLED.

[1] https://www.savanozin.com/projects/qod


True for current OLED panels, but new OLEDs with LCD-like subpixel arrangements were just announced at CES. Those shouldn't have that problem.

https://news.lgdisplay.com/en/2025/12/lg-display-unveils-wor...


Many monitors use the same panels with only firmware differences. The panel technology IPS/VA/OLED/WOLED is what you shop for.

If you're a gamer, QD-OLED is best. If you do office work, just get whatever is high resolution and makes text sharp.


If you truly don't want to research, use the rtings "best monitor for X" articles, find your budget, and buy that one. If you feel the need to compare further, pop the model numbers into your favourite LLM.

The 32UQ850 would be my choice if I were in Europe.

Isn't the main difference glossy vs. matte? With glossy you usually get bright, great color, and that's what you get on cellphones and MacBooks as well. For some reason matte is still the preference when it comes to monitors, and you can't escape their muted color palette.

> matte is still the preference when it comes to monitors

The larger screen size of a monitor is more likely to reflect lights than a mobile phone screen.


Also when glare does appear, it is harder to adjust the viewing angle because the user is not already holding the device.

I have trouble making out details on my 45" UWQHD (3440x1440) display... so I don't see much point... maybe slightly easier-to-read typefaces... I'm already zooming in 25% most of the time.

On the plus side, I can comfortably fit my editor on half the screen and my browser on the other half.


Most people run in some HiDPI scaling mode so text doesn't become tiny.

But 1440p on a 45” is not good PPI. That could be why you’re struggling to see text clearly


Or... it could be my general vision-loss issues, in that I can hardly see...

I can't make out a native pixel as it is. I understand that if it had 2x the PPI it might handle rendering better, and that might help somewhat with visibility... I had a first-generation MBP with a Retina display, and it was amazing. But that's not my issue here. Not to mention the trouble of having my work laptop push effectively 4x the pixels, if such a mythical beast existed.

The size is so that I can actually work with a single screen, editor on one side, browser on the other... it's almost like 2x 3:2 displays in one. For a workflow it's pretty good... I don't game much, but it's nice for that and content viewing as well. I had considered using side by side displays, like 2x 27" in portrait mode... but settled on this, which is working surprisingly well for now.


This interaction perfectly illustrates the problem: the math between size and resolution that determines PPI is not being done.
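
For reference, the arithmetic is simple: PPI is the diagonal pixel count divided by the diagonal size in inches. A quick sketch in Go:

    package main

    import (
        "fmt"
        "math"
    )

    // ppi: diagonal resolution in pixels divided by diagonal size in inches.
    func ppi(w, h, diag float64) float64 {
        return math.Hypot(w, h) / diag
    }

    func main() {
        fmt.Printf("45\" 3440x1440: %.0f PPI\n", ppi(3440, 1440, 45)) // ~83 PPI
        fmt.Printf("27\" 5120x2880: %.0f PPI\n", ppi(5120, 2880, 27)) // ~218 PPI
    }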

Yeah, PPD is more useful, although with ultrawides I've also heard it's common to sit closer than the regular viewing distance, so you can glance at the sides of the screen for information.
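
For the curious, a rough small-angle approximation: PPD ≈ PPI × viewing distance (in inches) × tan(1°) ≈ PPI × distance × 0.0175. At a 28" viewing distance, the ~83 PPI panel above works out to roughly 41 PPD versus roughly 107 PPD for the ~218 PPI one - ballpark figures only.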

I would love the Apple Studio Display if the framerate wasn't actually crap.

Even for programming, 60Hz is not enough, man.

Plus more peripheral ports.


I have a Studio Display and would also love if it had a much higher refresh rate, but only because I play WoW on it. Why isn't 60hz enough for programming? I don't think I notice the refresh rate at all when not playing a video game or watching videos.

When I move my mouse, or if I'm moving between full screens (terminal to IDE to browser), I see the stutter and slowness.

I code on an LG UltraGear OLED 4K HDR 240Hz gaming monitor.

Could be my sensitivity though.


I'm personally not very sensitive to refresh rates, I only really notice it in video games and it wasn't enough to keep me from replacing my 120hz primary monitor with the Studio Display. I was just curious about why you prefer higher refresh rate for programming, thanks for answering!

> I see the stutter and slowness

Sounds like it is not maintaining 60 fps. Higher display refresh rate wouldn't fix that.


$1-3k is 52 weeks of groceries for some people.

It seems rather silly to assume all people universally have the same needs, desires, and expenses. We don't live in the world of The Giver. I can accept that firefighters need a truck much more advanced and expensive than I ever will. It would be odd to compare that expense to how many pizzas I order each year.

> So many of us have to stare at a screen for hours every day and having one that reduces strain on my eyes is well worth $1-3k if they'd just make them.

I'm 53 y/o and didn't have glasses until 52. And at 53 I only use them sporadically. For example, atm I'm typing this without my glasses. I can still work at my computer without glasses.

And yet I've spent 10 hours a day in front of computer screens since I was a kid, nearly every day of my life (don't worry, I did my share of MX biking, skateboarding, bicycling, tennis, etc.).

You know the biggest eye relief for me? Not using anti-aliased fonts. No matter the DPI. Crisp, sharp, pixel-perfect fonts only for me. Zero AA.

So a 110 / 120 ppi screen is perfect for me.

Not if you do use anti-aliased fonts (and most people do); I understand the appeal of smaller pixels for more subtle AA.

But yup: pixel perfect programming font, no anti-aliasing.

38" ultra-wide, curved, monitor. Same monitor since 2017 and it's my dream. My wife OTOH prefers a three monitors setup.

So: people have different preferences and that is fine. To each his own bad tastes.


Which font do you prefer for code?

We do, but many of us also want to play a game on occasion, and GPUs can just barely handle 4K these days, let alone 6K+.

Anecdata, but I played games at 4K on a 4GHz Haswell (2013) + 1080 Ti (2017). Definitely faster at 2K, but 4K was serviceable. It's probably less true now that I'm 1+ years removed from that hardware, but 4K gameplay is surprisingly accessible on modest hardware, IMO.

I currently have a 4K monitor (+ an NVIDIA 4070 Super), and it does handle some games fine at 4K, but for others I need to use 2K with upscaling. Depends on the game.

So, good news: there are a fair number of super-high-resolution monitors coming soon that offer a "dual mode" - a lower resolution with a higher refresh rate. They're pretty cool.

That's not really true. I tried out 5K, which you'd reasonably expect to be quite heavy, but honestly with DLSS it's super viable. If you get the gaming versions of these displays, they also have dual modes: in that mode a 6K display is less heavy to run than a 4K one, and a 5K display becomes 1440p.

Upscaling tech has come a long way, so games can run at internal resolutions lower than your monitor's without looking like complete crap on an LCD.

I game at 6K… I don't play shooters, so it's fine. I turn off VSync and get 100+ FPS in my game of choice (WoW, admittedly an oldish engine). 4090 GPU.

Some of these newer monitors support a lower native resolution as well, usually with a faster refresh rate. It's a nice feature.

One of the things I wish more people talked about isn't just the language or the syntax, but the ecosystem. Programming isn't just typing, it's dealing with dependencies and trying to wire everything up so you can have tests, benchmarks, code-generation and build scripts all working together well.

When I use modern languages like Go or Rust, I don't have to deal with all the stuff that was bolted onto other languages over the past 20 years - Unicode, unit testing, linting, concurrency - it's all native from day one.

I use Go where the team knows Java, Ruby, or TypeScript but needs performance with low memory overhead. All the normal stuff is right there in the stdlib: JSON parsing, ECC/RSA encryption, image generation. You can write a working REST API with zero dependencies. Not to mention that, so far, every Go program I've ever seen still compiles fine, unlike those Python or Ruby projects where everything is broken because it's been 8 months.
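
To illustrate the zero-dependency claim, a minimal sketch of a JSON endpoint using only the stdlib (the route and payload here are made up):

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    func main() {
        // One handler; no third-party router or framework needed.
        http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }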

However, I'd pick Rust when the team isn't scared of learning to program for real.


I like Rust.

I don't like that for fairly basic things one has to quickly reach for crates. I suppose it allows the best implementation to emerge and not be concerned with a breaking change to the language itself.

I also don't like how difficult it is to cross-compile from Linux to macOS. zig cc exists, but quickly runs into a situation where a linker flag is unsupported. The rust-lang/libc also (apparently?) insists on adding a flag related to iconv for macOS even though it's apparently not even used?

But writing Rust is fun. You kind of don't need to worry so much about trivialities because the compiler is so strict and can focus on the interesting stuff.


Yeah, I've never seen an all-in-one language like Go before. Not just a huge stdlib where you don't have to vet authors on GitHub to see if you'll be okay using their package, but also a huge amount of utility built in: benchmarking, testing, cross-platform support, profiling, formatting, and race detection, to name a few. I'm sad they still allow null, but they got a lot right when it comes to the tools.
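
As a sketch of how little ceremony the built-in tooling needs (hypothetical Add function; this lives in a _test.go file and runs via `go test -bench=.`):

    package adder

    import "testing"

    func Add(a, b int) int { return a + b }

    func TestAdd(t *testing.T) {
        if got := Add(2, 3); got != 5 {
            t.Fatalf("Add(2, 3) = %d, want 5", got)
        }
    }

    func BenchmarkAdd(b *testing.B) {
        for i := 0; i < b.N; i++ {
            Add(2, 3) // the testing package times this loop for you
        }
    }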

Everything is literally built in. It's the perfect scripting-language replacement, with fast compile times and a tiny language spec (the Java spec is ~900 pages vs. Go's ~130), making it easy to fully train C-family devs on it within a couple of weeks.


Oh, Go with Rust's Result/Option setup, and maybe better consts like in the article, would be great.

Too bad null/nil is here to stay, since there's no Go 2.

Or maybe they would? IIRC Go 1.22 technically had a breaking change related to for loops. If only it were possible to have migration tooling. I guess it's too large a change.


Technically, with generics, you can get a Result that is almost as good as Rust's, but it's unidiomatic and awkward to write:

    type Result[T, E any] struct {
        Val   T
        Err   E
        IsErr bool
    }

    type Payload string

    type ProgError struct {
        Prog   string
        Code   int
        Reason string
    }

    func DoStuff(x int) Result[Payload, ProgError] {
        if x > 8 {
            return Result[Payload, ProgError]{Err: ProgError{Prog: "ls", Code: 1, Reason: "no directory"}, IsErr: true}
        }
        return Result[Payload, ProgError]{Val: "hello"}
    }
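
And a sketch of the calling side (assuming the usual fmt/log imports), which is where the awkwardness shows - nothing forces you to check IsErr before touching Val:

    res := DoStuff(9)
    if res.IsErr {
        log.Fatalf("%s exited %d: %s", res.Err.Prog, res.Err.Code, res.Err.Reason)
    }
    fmt.Println(res.Val)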


Nullability is unavoidable in Go because of zero values.

https://go.dev/ref/spec#The_zero_value
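
A minimal illustration: every Go type must have a usable zero value, and for pointers (as well as maps, slices, channels, funcs, and interfaces) that value is nil:

    package main

    import "fmt"

    func main() {
        var s string // zero value: ""
        var n int    // zero value: 0
        var p *int   // zero value: nil - it can't be designed away
        fmt.Println(s == "", n == 0, p == nil) // true true true
    }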


C# is pretty close.


This was not the case for a long time. It's actually fairly recent that you get Native AOT and trimming to meaningfully reduce build sizes and build times; otherwise the binaries ship with a giant runtime library.


Even back in the .NET Core 3.1 days, C# had a more-than-competitive performance profile versus Go, and _much_ better multi-core scaling on allocation-heavy workloads.

It's also disingenuous to say that whatever it ships with is huge.

The industry's common misconception that AOT is optimal and desirable for server workloads is unfortunate. The deployment model (single slim binary vs. many files vs. host-dependent) is completely unrelated to whether the application uses JIT or AOT. Even with a carefully gathered profile, Go produces much worse compiler output than .NET (or the JVM, for that matter) for something as trivial as a hashmap lookup.


There's cross-rs, which simplifies things. But the main problem is less about unsupported linker flags and more about cross-compiling the C dependencies somewhere in the dependency chain - that's always a nightmare, and not really anything to do with Rust (Go should have similar difficulties with cross-compilation).


Fair take for nontrivial projects.

Buuut with Go one generally tends to reach for dependencies less, so you're less likely to run into this - and cgo is not Go ;) https://go-proverbs.github.io

For cross-compiling, I actually ended up filtering out the -liconv flag with a bash wrapper and compiled a custom zig cc with support for exported_symbols_list patched in; things appear to work.

I should look into cross-rs, I suppose. Hope it's not one of those "download the macOS SDK from this unofficial source" setups that people seem to do. Apparently that's not allowed by Apple.


Cross-compiling to Apple platforms from non-Apple platforms runs into the same hurdle around SDK setup as anything else. Documentation exists, but it's probably not the easiest task. This limitation applies equally to any library that depends on system C headers and/or system libraries.


I feel like we're talking in loops.

Go is generally fine for crosscompiling.

edit: what gave me pain with Rust for a cli was clap (with derive, the default). Go just worked.


> Go should have similar difficulties with cross compilation

It doesn't. Go code can be cross-compiled for any supported OS and CPU arch from any supported system, and it comes that way out of the box. You don't have to install or configure anything extra to do it.
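
For example (illustrative target names; this is the pure-Go story - cgo changes things, as noted below):

    # from any supported host, no extra toolchains required
    GOOS=linux   GOARCH=arm64 go build -o app-linux-arm64 .
    GOOS=windows GOARCH=amd64 go build -o app-windows.exe .
    GOOS=darwin  GOARCH=arm64 go build -o app-macos-arm64 .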


We're not talking about Go here - this is true for Rust. The issue is building against C libraries and APIs for a different OS. Unless Go has done some magic I'm unaware of, it's the same problem; it's just that cgo isn't super popular in the Go community.


The crates.io ecosystem for Rust... is like the amazing girlfriend that you go head over heels for, make her your wife, and then you meet the in-laws ... but it's too late now.

Unlimited access to a bunch of third party code is great as you're getting started.

Until it isn't, and you're swimming in a fishing net full of code you didn't write and dependencies you don't want. Everything you touch eventually brings all of tokio along with it. And three or four different versions of random-number generators or base64 utilities, etc., etc.


> However, I'd pick Rust when the team isn't scared of learning to program for real.

I've been learning Rust. It's elegant, and I am enjoying it.

The Rust people, however, are absolutely annoying. Never have I seen a worse group of language zealots.


I can't speak for other Rust programmers but I can speak for myself.

I obviously enjoy programming in Rust and I like many of the choices it made, but I'm well aware of the tradeoffs Rust has made, and I understand why other languages chose not to make them. Nor do I think Rust works equally well in every single use case.

I imagine most Rust users think like this, but unfortunately there seems to be a vocal minority who hold very dogmatic views of programming who have shaped how most people view the Rust community.


Guess I am scared to program for real since I don't use a "real" language like Rust. What a wild statement to make.


This is such a big deal and I wish more people talked about it in these types of blog posts.

I used to be a Python programmer, and there were two things that destroyed every project:

- managing Python dependencies

- inability to reason about the input and output types of functions, and inability to enforce them; in Python, any function can accept a value of any type and return a value of any type.

These issues are not too bad if it's a small project and you're the sole developer. But as projects get larger and require multiple developers, it turns into a mess quickly.

Go solved all these issues and makes deployment so much easier. Of all the projects I've done, I estimate more than half have zero dependencies outside of the standard library. And unlike Python, you don't have to "install" Go or its libraries on the server you plan to run your program on. A fully static, self-contained executable binary with zero external files is amazing, and the fact that you can cross-compile for any OS + CPU arch out of the box on any supported system is a miracle.

The issues described in the original post seem like small potatoes compared to the benefits I've gained by shifting from Python over to Go.


Restrict data collection? It would kill all startups and firmly entrench a terrible monopoly of the few providers who can comply.

Have the government own data collection? Yeah, I don't even know where to start with all the problems this would cause.

Ignore it and let companies keep abusing customers? Nope.

Stop letting class-action lawsuits slap the company's wrists and then give $0.16 payouts to everyone?

What exactly do we do without killing innovation, building moats around incumbents, giving all the power to politicians who will just do what the lobbyists ask (statistically), or accepting things as is?


Why do startups need to collect data like this?


I work for a medical technology company. How do you propose we service our customers without their medical data?


Does it need to be hosted on your servers? Could you provide something where the customers host the data, or their local doctor's office does?

Can you delete it after the shortest possible period of using it, potentially? Do you keep data after someone stops being a customer or stops actively using the tech?


Record retention is covered by a complex set of overlapping regulations and contracts, which depend on much more than the date of service. M&A activity, interstate operations, subsequent changes in patient mental status, etc. can all cause the horizon to change well after the last encounter.

As all the comments in this thread suggest, the cost of keeping an extra record - even an extra breached record - is low. The cost of failing to produce a required medical record is high.

Put this together with dropping storage prices, razor-thin margins, and IT estates made of thousands of specialized point solutions cobbled together with every integration pattern ever invented, and you get a de facto retention of infinity paired with a de jure obligation of could-be-anything-tomorrow.


Professionally, my company builds one of the largest EHR-integrated web apps in the US.

Ask me how many medical practices connect every day via IE on Windows 8.


I'm not trying to be rude, but it's clear you have no idea what you're talking about. The medical world is heavily regulated, and there are things we must do and things we can't do. If you go to your doctor with a problem, would you want your doctor to have the least amount of information possible, or your entire medical history? The average person has no business hosting sensitive data like banking and medical information themselves. If you think fraud and hacks are bad now, what do you think would happen if your parents were forced to store their own data? Or if a doctor who can barely use an EMR were responsible for the security of your medical data? I would learn a lot more about the area before making suggestions.


Having seen this world up close, the absolute last place you ever want your medical data to be is on the Windows Server in the closet of your local doctors office. The public cloud account of a Silicon Valley type company that hires reasonably competent people is Fort Knox by comparison.


Yeah, but a local private practice is a fairly small target. No one is going to break into my house just to steal my medical records, for example.

This could also be drastically improved by the government spearheading a FOSS project for medical data management (archival, backup, etc). A single offering from the US federal government would have a massive return on investment in terms of impact per dollar spent.

Maybe the DOGE staff could finally be put to good use.


You seem to be confused about how this works. Attackers use automated scripts to locate vulnerable systems. Small local private practices are always targeted because everything is targeted. The notion of the US federal government offering an online data backup service is ludicrous, and wouldn't have even prevented the breach in this article.


> Attackers use automated scripts to locate vulnerable systems.

I'm aware. I thought we were talking about something a bit higher effort than that.

> online data backup service

That isn't what I said. I suggested federally backed FOSS tooling for the specific usecase. If nothing else that would ensure that low effort scanners came up empty by providing purpose built software hardened against the expected attack vectors. Since it seems we're worrying about the potential for broader system misconfiguration they could even provide a blessed OS image.

The breach in the article has nothing to do with what we're talking about. That was a case of shadow IT messing up. There's not much you can do about that.


I just registered CVEs in several platforms in a related industry, the founders of whom likely all asked themselves a similar question. And yet, it's the wrong question. The right one is, "Does this company need to exist?" I don't know you or your company. Maybe it's great. But many startups are born thinking there's a technological answer to a question that requires a social/political one. And instead of fixing the problem, the same founders use their newfound wealth to lobby to entrench the problem that justifies their company's existence, rather than resolves the need for it to exist in the first place. "How do you propose we service our customers without their medical data?" Fix your fucked healthcare system.


Ask for it?


I hope you're joking...

Otherwise it would suggest you think the problem is they didn't ask? When was the last time you saw a customer read a terms of service? Or better yet reject a product because of said terms once they hit that part of the customer journey?

The issue isn't the asking; it's that, for take-your-pick reasons, no one ever says no. The asking is thus pro forma and irrelevant.


We apply crippling fines on companies and executives that let these breaches happen.

Yes, some breaches (actual hack attacks) are unavoidable, so you don't slap a fine on every breach. But the vast majority of "breaches" are pure negligence.


> Restrict data collection? It would kill all startups and firmly entrance a terrible provider monopoly who can comply.

That's a terrible argument for allowing our data to be sprayed everywhere. How about regulations with teeth that prohibit "dragons" from hoarding data about us? I do not care what the impact is on the "economy". That ship sailed with the current government in the US.

Or, both more and less likely, cut us in on the revenue. That would at least pay for some of the time we have to waste doing a bunch of work every time some company "loses" our data.

I'm tired of subsidizing the wealth and capital class. Pay us for holding our data or make our data toxic.

Obviously my health provider and my bank need my data. But no one else does. And if my bank or health provider need to share my data with a third party it should be anonymized and tokenized.

None of this is hard, we simply lack will (and most consumers, like voters are pretty ignorant).


The solution is to anonymize all data at the source, i.e., use a unique randomized ID as the key instead of someone's name/SSN. The medical provider would then store the UID->name mapping in a separate, easily secured (and ideally air-gapped) system for the few times it's actually needed.
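
A minimal sketch of the idea in Go (the function and field names are hypothetical):

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
    )

    // newPatientID returns a 128-bit random identifier carrying no
    // information about the patient it will be mapped to.
    func newPatientID() (string, error) {
        b := make([]byte, 16)
        if _, err := rand.Read(b); err != nil {
            return "", err
        }
        return hex.EncodeToString(b), nil
    }

    func main() {
        id, err := newPatientID()
        if err != nil {
            panic(err)
        }
        // Records on the main system key off id alone; the id->name
        // mapping lives only on the separate, isolated system.
        fmt.Println(id)
    }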


> ...use a unique randomized ID as the key...

33 bits is all that's required to individually identify any person on Earth (2^33 ≈ 8.6 billion, comfortably more than the current population).

If you'd like to extend that to the 420 billion or so people who've lived since 1800, that grows to 39 bits (2^39 ≈ 550 billion), still a trivially small amount.

Every bit[1] of leaked data cuts that set in half, and simply anonymising IDs does virtually nothing by itself to obscure identity. Such critical medical and billing data as date of birth and postal code are themselves sufficient to narrow things down remarkably, let alone a specific set of diagnoses, procedures, providers, and medications. Much as browser fingerprints are often unique or nearly so without any universal identifier, so are medical histories.

I'm personally aware of diagnostic and procedure codes being used to identify "anonymised" patients across multiple datasets dating to the early 1990s, and of research into de-anonymisation in Australia as of the mid-to-late 1990s. Australia publishes anonymisation and privacy guidelines, e.g.:

"Data De‑identification in Australia: Essential Compliance Guide"

<https://sprintlaw.com.au/articles/data-de-identification-in-...>

"De-identification and the Privacy Act" (2018)

<https://www.oaic.gov.au/privacy/privacy-guidance-for-organis...>

So it's not sufficient merely to substitute an alternative primary key; you also have to fuzz the data - birthdates, addresses, diagnostic and procedure codes, treatment dates, etc. - all of which both reduces the clinical value of the data and is difficult to do sufficiently.

________________________________

Notes:

1. In the "binary digit" sense, not in the colloquial "small increment" sense.


What a silly idea. That would completely prevent federally mandated interoperability APIs from working. While privacy breaches are obviously a problem, most consumers don't want care quality and coordination harmed just for the sake of a minor security improvement.

https://www.cms.gov/priorities/burden-reduction/overview/int...


[deleted]


Honestly, I'd take the 16 cents. Usually it's a discount voucher for a product you'd never buy.

Or if it's a freebie, it's hidden behind a plain-text link three levels deep on their website.


100% a state bot. I wouldn't even assume it was just France; other state actors would love to see GrapheneOS go down as well. How dare citizens have technology we can't access.


I know they say your programming language isn't the bottleneck, but I remember sitting there as a young dev, frustrated that I couldn't parse faster in the languages I was using, when I learned about Go.

It took a few more years before I actually got around to learning it and I have to say I've never picked up a language so quickly. (Which makes sense, it's got the smallest language spec of any of them)

I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.


The nice thing about Go is that you can learn "all of it" in a reasonable amount of time: gotchas, concurrency stuff, everything. There is something very comforting about knowing the entire spec of a language.

I'm convinced no more than a handful of humans understand all of C# or C++, and inevitably you'll come across some obscure thing and have to context switch out of reading code to learn whatever the fuck a "partial method" or "generic delegate" means, and then keep reading that codebase if you still have momentum left.


> The nice thing about Go is that you can learn "all of it" in a reasonable amount of time

This always feels like one of those "taste" things that some programmers like on a personal level, but there's almost no evidence it leads to more real-world success than any other language.

Like, people get real work done every day at scale with C# and C++. And Java, and Ruby, and Rust, and JavaScript. And every other language that programmers castigate as being huge and bloated.

I'm not saying it's wrong to have a preference for smaller languages; I just haven't seen anything in my career to indicate that smaller languages outperform when it comes to faster delivery or fewer bugs.

As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.


Just an anecdote and not necessarily generalizable, but I can at least give one example:

I'm in academia doing ML research where, for all intents and purposes, we work exclusively in Python. We had a massive CSV dataset which required sorting, filtering, and other data transformations. Without getting into details, we had to rerun the entire process when new data came in roughly every week. Even using every trick to speed up the Python code, it took around 3 days.

I got so annoyed by it that I decided to rewrite it in a compiled language. Since it had been a few years since I'd written any C/C++ - which was only for a single class in undergrad, and I remember very little of it - I decided to give Go a try.

I was able to learn enough of the language and write up a simple program to do the data processing in less than a few hours, which reduced the time it took from 3+ days to less than 2 hours.

I unfortunately haven't had a chance or a need to write any more Go since then. I'm sure other compiled, GC languages (e.g., Nim) would've been just as productive or performant, but I know that C/C++ would've taken me much longer to figure out and would've been much harder to read/understand for the others that work with me who pretty much only know Python. I'm fairly certain that if any of them needed to add to the program, they'd be able to do so without wasting more than a day to do so.
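
For flavor, the skeleton of such a program isn't much code - a hedged sketch, with keep() standing in for the real filtering logic:

    package main

    import (
        "encoding/csv"
        "io"
        "log"
        "os"
    )

    // keep is a hypothetical stand-in for the real filter/transform predicate.
    func keep(rec []string) bool {
        return len(rec) > 0 && rec[0] != ""
    }

    func main() {
        r := csv.NewReader(os.Stdin)
        w := csv.NewWriter(os.Stdout)
        defer w.Flush()

        for {
            rec, err := r.Read() // streams the input; no need to load it all
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            if keep(rec) {
                if err := w.Write(rec); err != nil {
                    log.Fatal(err)
                }
            }
        }
    }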


Did you try scipy/numpy or any python library with a compiled implementation before picking up Go?


Of course, but the dataset was mostly strings that needed to be cross-referenced with GIS data. I tried every library under the sun. The greatest speedup came from using polars to process the mostly-string CSVs, but it didn't help much. That said, I think polars had also only just been released when we were working with that dataset, and I'm sure there have been a lot of performance improvements since then.


These only help if you can move the hot loop into compiled code inside those libraries. There are a lot of cases where this isn't possible, and at that point there's just no way to make Python fast (basically, as soon as you have a for loop in Python that runs over every point in your dataset, you've lost).


> I’m not saying it’s wrong to have a preference for smaller languages, I just haven’t seen anything in my career to indicate that smaller languages outperform when it comes to faster delivery or less bugs.

I can imagine myself grappling with a language feature that's unobvious to me and eventually getting distracted. Sure, there are lots of things unobvious to me, but Go isn't one of them, and that has influenced the whole environment.

Or, when choosing the right language feature, I could end up weighing excessively many options and still fail to get it right from the language-correctness perspective (making the code scalable, nice-looking, uniform, playing well with other features, etc.).

An example not related to Go: bash and rc [1]. Understanding 16 pages of Duff’s rc manual was enough for me to start writing scripts faster than I did in bash. It did push me to ease my concerns about program correctness, though, which I welcomed. The whole process became more enjoyable without bashisms getting in the way.

Maybe it’s hard to measure the exact benefit but it should exist.

1: https://9p.io/sys/doc/rc.html


I think Go is a great language when hiring. If you're hiring for C++, you'll be wary of someone who only knows JavaScript as they have a steep learning curve ahead. But learning Go is very quick when you already know another programming language.


I agree that empirical data in programming is difficult, but I've used many of those languages personally, so I can say for myself at least that I'm far more productive in Go than in any of those other languages.

> As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.

I think those problems are related. The more features you have, the more difficult it becomes to avoid strange, surprising interactions. It’s like a pharmacist working with a patient who is taking a whole cocktail of prescriptions; it becomes a combinatorial problem to avoid harmful reactions.


> Like, people get real work done every day at scale with C# and C++.

That would be me. I _like_ C#, but there are parts of that language I simply _never_ work with; it's just way too large a language.

Go is refreshing in its simplicity.


I've been writing go professionally for about ten years, and with go I regularly find myself saying "this is pretty boring", followed by "but that's a good thing" because I'm pretty sure that I won't do anything in a go program that would cause the other team members much trouble if I were to get run over by a bus or die of boredom.

In contrast writing C++ feels like solving an endless series of puzzles, and there is a constant temptation to do Something Really Clever.


> I'm pretty sure that I won't do anything in a go program that would cause the other team members much trouble

Alas, there are plenty of people who do[0] - for some reason Go takes architecture-astronaut brain and whacks it up to 11, and god help you if you have one or more of those on your team.

[0] flashbacks to the interface calling an interface calling an interface calling an interface I dealt with last year - NONE OF WHICH WERE NEEDED because it was a bloody hardcoded value in the end.


My cardinal rule in Go is just don't use interfaces unless you really, really need to and there's no other way. If you're using interfaces you're probably up to no good and writing Java-ish code in Go. (usually the right reason to use interfaces is exportability)

Yes, not even for testing. Use monkey-patching instead.


> My cardinal rule in Go is just don't use interfaces unless you really, really need to and there's no other way.

They do make some sense for swappable doodahs - like buffers / strings / filehandles you can write to - but those tend to be in the lower levels (libraries) rather than application code.
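
io.Writer is the canonical example - the same function serves production code and tests with no mocking framework. A minimal sketch:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"
    )

    // report writes to any io.Writer: a file, a socket, a buffer...
    func report(w io.Writer, msg string) {
        fmt.Fprintln(w, msg)
    }

    func main() {
        report(os.Stdout, "to stdout")

        var buf bytes.Buffer // in a test, assert on buf.String()
        report(&buf, "to a buffer")
        fmt.Print(buf.String())
    }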


Go is okay. I don't hate it but I certainly don't love it.

The packaging story is better than C++'s or Python's, but that's not saying much; the way it handles private repos is a colossal pain, and the fact that originally you had to have everything under one particular blessed directory - with modules an afterthought - sure speaks volumes about the critical thinking (or lack thereof) that went into the design.

Also I miss being able to use exceptions.


When Go was new, having better package management than Python and C++ was saying a lot. I’m sure Go wasn’t the first, but there weren’t many mainstream languages that didn’t make you learn some imperative DSL just to add dependencies.


Sure, but none of those languages had the psychotic design that mandated all your code live under $GOPATH for the first several versions.

I'm not saying it's awful, it's just a pretty mid language, is all.


I picked up Go precisely in 2012 because $GOPATH (as bad as it was) was infinitely better than CMake, Gradle, Autotools, pip, etc. It was dead simple to do basic dependency management and get an executable binary out. In any other mainstream language on offer at the time, you had to learn an entire programming language just to script your meta build system before you could even begin writing code, and that build system programming language was often more complex than Go.


That was a Plan9ism, I think. Java had something like it with CLASSPATH too, didn't it?


I never understood the GOPATH freakout, coming from Python it seemed really natural- it's a mandatory virtualenv.


The fact that virtualenv exists at all should be viewed by the python community as a source of profound shame.

The idea that it's natural and accepted to have Python 3.11, 3.12, 3.13, etc. all coexisting, each with its own incompatible package ecosystem, in use on an ad hoc, per-directory basis, just seems fundamentally insane to me.


The language has changed a lot since then. Give it a fresh look sometime.


It's still pretty mid and still missing basic things like sets.

But mid is not all that bad and Go has a compelling developer experience that's hard to beat. They just made some unfortunate choices at the beginning that will always hold it back.


The tradeoff for that language simplicity is that there are a whole lot of gotchas that come with Go. It makes things look simpler than they actually are.


> I'm convinced no more than a handful of humans understand all of C# or C++

How would the proportion of humans that understand all of Rust compare?


For Rust vs C++, I'd say it'll be much easier to have a complete understanding of Rust. C++ is an immensely complex language, with a lot of feature interactions.

C# is actually fairly complex. I'm not sure if it's quite at the same level as Rust, but I wouldn't say it's that far behind in difficulty for complete understanding.


Rust managed to learn a lot from C++ and other languages' mistakes.

So while it has quite a bit of essential complexity (inherent in the design space it operates: zero overhead low-level language with memory safety), I believe it fares overall better.

Like no matter the design, a language wouldn't need 10 different kinds of initializer syntaxes, yet C++ has at least that many.


I'm pretty convinced that nobody has a full picture of Rust in their head. There isn't even a spec to read.


There is, in fact, a spec to read[1], as of earlier this year.

[1] https://rustfoundation.org/media/ferrous-systems-donates-fer...


Rust is very advanced, with things like higher-ranked trait bounds (https://doc.rust-lang.org/nomicon/hrtb.html) and generic associated types (https://www.ncameron.org/rfcs/1598) that are difficult because they are essential complexity not accidental complexity.


For Rust I'd expect the implementation to be the real beast, versus the language itself. But not sure how it compares to C++ implementation complexity.


Rust isn’t that complicated if you have some background in non GC languages.


The parent said _all_ of it, not the subset for everyday use.


There's a different question too, one I think is more important (for any language): how much of the language do you need to know in order to use it effectively? As another poster mentioned, the issue with C++ might not be the breadth of features, but rather how they interact in non-obvious ways.


This is also what I like about JS, except it's even easier than Go. Meanwhile Python has a surprising number of random features.


ECMAScript is an order of magnitude more complicated than Go by virtually every measure - length of language spec, ease of parsing, number of context-sensitive keywords and operators, etc.


Yeah, I'm pretty sure people who say JS is easy don't know about its prototype-based OOP.


You don't have to know about it, but if you do, it's actually simpler than how other languages do OOP.


Not convinced. Especially with property flags.


strict mode makes it okay


Sorry, hard disagree. Try to understand what `this` means in JS in its entirety and you'll agree it's by no stretch of the imagination a simple language. It's mind-bending - hence _The Good Parts_.



I think JS is notoriously complicated: the phrase “the good parts” has broad recognition among programmers.


Just so we're on the same page, this is the current JS spec:

https://262.ecma-international.org/16.0/index.html

I don't agree. (And frankly don't like using JS without at least TypeScript.)


While I might not think that JS is a good language (for some definition of a good language), to me the provided spec does feel pretty small, considering that it's a language that has to be specified to the dot and that the spec contains the standard library as well.

It has some strange or weirdly specified features (ASI? HTML-like comments?) and unusual features (prototype-based inheritance? a dynamically bound this?), but IMO it's a small language.


Shrugging it off as large just because it contains the "standard library" ignores that many JS language features necessarily use native objects like symbols or promises, which can't be implemented entirely in JavaScript alone - so they're intrinsics rather than standard-library components, akin to Go builtins rather than Go's standard library. In practice, the browser and/or Node.js provide the actual standard library, including things like fetch, sockets, compression codecs, etc. Even ignoring almost all of those, the spec is absolutely enormous, because JavaScript has:

- Regular expressions - not just in the "standard library" but in the syntax.

- An entire module system with granular imports and exports

- Three different ways to declare variables, two of which create temporal dead zones

- Classes with inheritance, including private properties

- Dynamic properties (getters and setters)

- Exception handling

- Two different types of closures/first class functions, with different binding rules

- Async/await

- Variable length "bigint" integers

- Template strings

- Tagged template literals

- Sparse arrays

- for in/for of/iterators

- for await/async iterators

- The with statement

- Runtime reflection

- Labeled statements

- A lot of operators, including bitwise operators and two sets of equality operators with different semantics

- Runtime code evaluation with eval/Function constructor

And honestly it's only scratching the surface, especially of modern ECMAScript.

A language spec is necessarily long. The JS language spec, though, is so catastrophically long that it's hard to even load on a low-end machine or a mobile web browser. It's on another planet.


Yeah, a lot of the quirks come from it being small


The Javascript world hides its complexity outside the core language, though. JS itself isn't so weird (though as always see the "Wat?" video), but the incantations required to type and read the actual code are pretty wild.

By the time you understand all of typescript, your templating environment of choice, and especially the increasingly arcane build complexity of the npm world, you've put in hours comparable to what you'd have spent learning C# or Java for sure (probably more). Still easier than C++ or Rust though.


…do you know you can just write JavaScript and run it in the browser? You don’t need TypeScript, NPM or build tools.


You do if you want more than one file, or if you want to use features that a user’s target browser may not support.


> You do if you want more than one file

Modules were added in, like, 2016.


Node.js and npm are easy for beginners, especially compared to the Python packaging situation.


I learned Go this year, and this assertion just... isn't true? There are a bunch of subtleties and footguns, especially with concurrency.

C++ is a basket case, it's not really a fair comparison.


As they (I) say: writing a concurrent Go program is easy; writing a correct one is a different story :)


I’ve been using Python since 2008, and I don’t feel like I understand very much of it at all, but after just a couple of years of using Go in a hobby capacity I felt I knew it very well.


Well that's good, since Go was specifically designed for juniors.

From Rob Pike himself: "It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."

However, the main design goal was to reduce build times at Google. This is why unused dependencies are a compile time error.

https://go.dev/talks/2012/splash.article#TOC_6.


> This is why unused dependencies are a compile time error.

https://go.dev/doc/faq?utm_source=chatgpt.com#unused_variabl...

> There are two reasons for having no warnings. First, if it’s worth complaining about, it’s worth fixing in the code. (Conversely, if it’s not worth fixing, it’s not worth mentioning.) Second, having the compiler generate warnings encourages the implementation to warn about weak cases that can make compilation noisy, masking real errors that should be fixed.

I believe this was a mistake (one that Zig sadly also follows). In practice there are too many things that wouldn't make sense as compiler errors, so you need to run a linter anyway. And when you need to comment out or remove some code temporarily, it won't even build; you then have to remove a chain of unused vars/imports until it lets you. It's just annoying.

Meanwhile, unlinted Go programs are full of little bugs, e.g. unchecked errors or err-var misuse. If only there were warnings...
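
The usual escape hatch while debugging is the blank identifier, which is itself exactly the churn being complained about - a minimal sketch:

    package main

    import (
        _ "fmt" // blank import: keeps a temporarily-unused import compiling
    )

    func main() {
        result := 42
        // fmt.Println(process(result)) // the code that used result, commented out
        _ = result // blank assignment silences the "declared and not used" error
    }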


Yeah, but just going back to warnings would be a regression.

I believe the correct approach is to offer two build modes: release and debug.

Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.

Release is the default, is strict and runs fast.

That way you can mess about in development all you want, but need to clean up before releasing. It would also take the pressure off having release builds compile fast, allowing for more optimisation passes.


That doesn't make any sense; you'd still need to run the linters on release. Why bail out on "unused var" and not on actually harmful stuff?


> Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.

At least in the golang / unused-vars at Google case, allowing unused vars is explicitly one of the things that makes compilation slower.

In that case it's not "faster compilation as in less optimization". It's "faster compilation as in don't have to chase down and potentially compile more parts of a 5,000,000,000 line codebase because an unused var isn't bringing in a dependency that gets immediately dropped on the floor".

So it's kinda an orthogonal concern.


Accidentally pulling in an unused dependency during development is, if not a purely hypothetical scenario, at least an extreme edge case. During debugging, most of the time you've already built those 5,000,000,000 lines while trying to reproduce a problem on the original version of the code. Since that didn't help, you now want to try commenting out one function call. Beep! Unused var.


Right, I meant that the binary should run slowly on purpose, so that people don't end up defaulting to just using the debug build. A nice way of doing so without just putting `sleep()`s everywhere would be to enable extra safety checks.


I feel like people always take the "designed for juniors" thing the wrong way, as implying that features or ideas beneficial to general software engineering were left out as a trade-off to make the language easier to learn, at the cost of what the language could be for a senior. I don't think the Go designers see these as opposing trade-offs.

What's good for the junior can be good for the senior. I think PL values have leaned a little too hard toward complexity and abstract "purity", while Go was a break away from that which has proved successful but controversial.


> This is why unused dependencies are a compile time error.

I think my favourite bit of Go opinionatedness is the code formatting.

K&R or GTFO.

Oh you don't like your opening bracket on the same line? Tough shit, syntax error.


But it also has an advantage: you can read a lot of other devs' code without twisting your eyes sideways over everybody's personal style.


Exactly.

"This is Go. You write it this way. Not that way. Write it this way and everyone can understand it."

I wish I was better at writing Go, because I'm in the middle of writing a massive and complex project in Go with a lot of difficult network stuff. But you know what they say: if you want to eat a whole cow, you just have to pick an end and start eating.


Yep... it's like people never read the main devs' stated motivations. The ability for people to read each other's code was a main point.

I don't know, but for me a lot of the attacks on Go come from non-Go developers, VERY often Rust devs. When I started Go, it was always Rust devs in /r/programming pushing their agenda of Rust as the next best thing, the whole "rewrite everything in Rust"...

About 10 years ago I learned Rust, and these days I can barely read the code anymore with the tons of new syntax that got added. It's like they forgot the lessons from C++...


> I don't know but for me a lot of attacks on Go, often come from non-go developers, VERY often Rust devs.

I see it as a bit like Python and Perl. I used to use both but ended up mostly using Python. They're different languages, for sure, but they work in similar ways and have similar goals. One isn't "better" than the other. You hardly ever see Perl now, I guess in the same way there's a lot of technology that used to be everywhere but is now mostly gone.

I wanted to pick a not-C language to write a thing to deal with a complex but well-documented protocol (GD92, and we'll see how many people here know what that is) that only has proprietary software implementing it, and I asked if Go or Rust would be a good fit. Someone told me that Go is great for concurrent programming particularly to do with networks, and Rust is also great for concurrent processing and takes type safety very seriously. Well then, I guess I want to pick apart network packets where I need to play fast and loose with ints and strings a bit, so maybe I'll use Go and tread carefully. A year later, I have a functional prototype, maybe close to MVP, written in Go (and a bit of Lua, because why not).

The Go folks seem to be a lot more fun to be around than the Rust folks.

But at least they're nothing like the Ruby on Rails folks.


Doesn't Google use mostly C++?


Just because it was a design goal doesn't mean it succeeded ;)

From Russ Cox this time: "Q. What language do you think Go is trying to displace? ... One of the surprises for me has been the variety of languages that new Go programmers used to use. When we launched, we were trying to explain Go to C++ programmers, but many of the programmers Go has attracted have come from more dynamic languages like Python or Ruby."

https://research.swtch.com/gotour


It's interesting that I've also heard the same from people involved in Rust. Expecting more interest from C++ programmers and being surprised by the numbers of Ruby/Python programmers interested.

I wonder if it's that Ruby/Python programmers were interested in using these kinds of languages but were being pushed away by C/C++.


The people writing C++ either don't need much convincing to switch because they see the value, or are unlikely to give it up anytime soon because they don't see anything Rust does as being useful to them; there's very little middle ground. People from higher-level languages, on the other hand, see in Rust a way to break into a space they would otherwise not attempt because it would take too long to reach proficiency. The hard part of Rust is trying to simultaneously have hard-to-misuse APIs and no additional performance penalty (however small). If you relax either of those goals (is it really a problem if you call that method through a v-table?), then Rust becomes much easier to write. I think a GC'd Rust would already be a nice language that I'd love, like a less convoluted Scala; it just wouldn't have fit in a free square that ensured a niche for it to exist and grow, and would likely have died on the vine.


I think on average C++ programmers are more interested in Rust than in Go. But C programmers are on average probably not interested in either. I do agree that the accessible nature of the two languages (or at least perception thereof) compared to C and C++ is probably why there's more people coming from higher-level languages interested in the benefits of static typing and better performance.


It really depends on product area.


No.


I write a lot of Go, a bit of Rust, and Zig is slowly creeping in.

To add to the above comment, a lot of what Go does encourages readability... Yes, it feels pedantic at moments (error handling), but those cultural and stylistic elements that seem painful to write make reading better.

Portable binaries are a blessing, fast compile times, and the choices made around 3rd party libraries and vendoring are all just icing on the cake.

That 80 percent feeling is more than just the language as written; it's all the things that come along with it...


Error handling is objectively terrible in Go. The explicitness of the always-repeating pattern just makes humans pay less attention to potentially problematic lines and otherwise increases the noise-to-signal ratio.


Error handling isn't even a pain to write anymore with AI autocomplete, which gets it right 95%+ of the time in my experience.


You're not wrong but... there is a large contingent of the Go community that has a rather strong reaction to AI/ML/LLM generated code at any level.

I keep using the analogy that the tools are just nail guns for office workers, but some people remain sticks in the mud.


Nail guns are great because they're instant and consistent. You point, you shoot, and you've unimpeachably bonded two bits of wood.

For non-trivial tasks, AI is neither of those. Anything you do with AI needs to be carefully reviewed to correct hallucinations and incorporate it into your mental model of the codebase. You point, you shoot, and that's just the first 10-20% of the effort you need to move past this piece of code. Some people like this tradeoff, and fair enough, but that's nothing like a nailgun.

For trivial tasks, AI is barely worth the effort of prompting. If I really hated typing `if err != nil { return nil, fmt.Errorf("doing x: %w", err) }` so much, I'd make it an editor snippet or macro.
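For anyone who hasn't written Go, here's that pattern in a minimal runnable form (the file name and error messages are made up for illustration):

    package main

    import (
        "fmt"
        "os"
    )

    func readGreeting(path string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            // the boilerplate in question: wrap with context, pass it up
            return "", fmt.Errorf("reading greeting: %w", err)
        }
        return string(data), nil
    }

    func main() {
        if _, err := readGreeting("missing.txt"); err != nil {
            fmt.Println(err) // reading greeting: open missing.txt: no such file...
        }
    }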


> Nail guns are great because they're instant and consistent. You point, you shoot, and you've unimpeachably bonded two bits of wood.

You missed it.

If I give a random person off the street a nail gun, circular saw, and a stack of wood, are they going to do a better job building something than a carpenter with a hammer and hand saw?

> Anything you do with AI needs to be carefully reviewed

Yes, and so does a JR engineer, so do your peers, so do you. Are you not doing code reviews?


> If I give a random person off the street a nail gun, circular saw and a stack of wood

If this is meant to be an analogy for AI, it doesn't make sense. We've seen what happens when random people off the street try to vibe-code applications. They consistently get hacked.

> Yes, and so does a JR engineer

Any junior dev who consistently wrote code like an AI model and did not improve with feedback would get fired.


You are responsible for the AI code you check in. It's your reputation on the line. If people felt the need to assume that much responsibility for all code they review, they'd insist on writing it themselves instead.


> there is a large contingent of the Go community that has a rather strong reaction to AI/ML/LLM generated code at any level.

This Go community that you speak of isn't bothered by writing the boilerplate themselves in the first place, though. For everyone else the LLMs provide.


> Which makes sense, it's got the smallest language spec of any of them

I think Go is fairly small, too, but “size of spec” is not always a good measure for that. Some specs are very tight, others fairly loose, and tightness makes specs larger (example: Swift’s language reference doesn’t even claim to define the full language. https://docs.swift.org/swift-book/documentation/the-swift-pr...: “The grammar described here is intended to help you understand the language in more detail, rather than to allow you to directly implement a parser or compiler.”)

(Also, browsing golang’s spec, I think I spotted an error in https://go.dev/ref/spec#Integer_literals. The grammar says:

  decimal_lit    = "0" | ( "1" … "9" ) [ [ "_" ] decimal_digits ] . 
Given that, how can 0600 and 0_600 be valid integer literals in the examples?)


You're looking at the wrong production. They are octal literals:

    octal_lit      = "0" [ "o" | "O" ] [ "_" ] octal_digits .


Thanks! Never considered that a 21st century language designed for “power of two bits per word” hardware would keep that feature from the 1970s, so I never looked at that production.

Are there other modern languages that still have that?


0600 and 0_600 are octal literals:

    octal_lit      = "0" [ "o" | "O" ] [ "_" ] octal_digits .


Never mind, I was wrong. Here’s a playground showing how go parses each one: https://go.dev/play/p/hyWPkL_9C5W


> Octals must start with zero and then o/O literals.

No, the o/O is optional (hence the square brackets); only the leading zero is required. All of these are valid octal literals in Go:

0600 (zero six zero zero)

0_600 (zero underscore six zero zero)

0o600 (zero lower-case-letter-o six zero zero)

0O600 (zero upper-case-letter-o six zero zero)
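A quick sanity check (runnable on the playground) showing that all four spellings parse to the same value, 384 in decimal:

    package main

    import "fmt"

    func main() {
        // the underscore and the o/O prefix are optional, interchangeable spellings
        fmt.Println(0600, 0_600, 0o600, 0O600) // 384 384 384 384
    }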


My bad! I was wrong; added a playground demonstrating the parsing behavior above.


My original comment was incorrect. These are being parsed as octals, not decimals: https://go.dev/play/p/hyWPkL_9C5W


I don't understand the framing you have here, of Rust being an asymptote of language capability. It isn't. It's its own set of tradeoffs. In 2025, it would not make much sense to write a browser in Go. But there are a lot of network services it doesn't really make sense to write in Rust: you take on a lot (colored functions, the borrow checker) just to avoid GC and goroutines.

Rust is great. One of the stupidest things in modern programming practice is the slapfight between these two language communities.


Unfortunately, it's the remaining 20% of Rust features that provide 80% of its usefulness.


Language can be a bottleneck if there's something huge missing from it that you need, like how many of them didn't have first-class support for cooperative multitasking, or maybe you need it to be compiled, or not compiled, or GC vs no GC. Go started out with solid greenthreading, while afaik no major lang/runtime had something comparable at the time (Java now does, supposedly).
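(To make the greenthreading point concrete, a minimal sketch of the kind of thing Go shipped with from day one:)

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 3; i++ {
            wg.Add(1)
            go func(n int) { // each goroutine is a cheap green thread
                defer wg.Done()
                fmt.Println("worker", n)
            }(i)
        }
        wg.Wait()
    }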

The thing people tend to overvalue is the little syntax differences, like how Scala wanted to be a nicer Java, or even ObjC vs Swift before the latter got async/await.


I'll be the one to nitpick, but Scala never intended to be a nicer Java. It was and still is an academic exercise in compiler and language theory. Also, judging by Kotlin's decent strides, "little syntax differences" get you a long way on a competent VM/runtime/stdlib.


Kotlin's important feature is the cooperative multitasking. Java code has been mangled all these years to work around not having that. I don't think many would justify the switch to Kotlin otherwise.


It's probably an important feature now, but it's a recent one in this context.


Oh true, I thought it was older


Similar story for me. I was looking for a language that just got out of the way. One that didn't require me to learn a full impenetrable DSL just to add a few dependencies, and which could easily produce an artifact I could share around without needing to make sure the target machine had all the right dependencies installed.


It really is a lovely language and ecosystem of tools. I think it does show its limitations fairly quickly when you want to build something a bit complex, though. Really wish they would have added sum types.
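(The usual workaround is a "sealed" interface with an unexported method; a minimal sketch below. It's not a real sum type, since the compiler won't check the switch for exhaustiveness:)

    package main

    import (
        "fmt"
        "math"
    )

    // Shape is a poor man's sum type: the unexported method prevents
    // other packages from adding new variants.
    type Shape interface{ isShape() }

    type Circle struct{ R float64 }
    type Rect struct{ W, H float64 }

    func (Circle) isShape() {}
    func (Rect) isShape()   {}

    func area(s Shape) float64 {
        switch v := s.(type) {
        case Circle:
            return math.Pi * v.R * v.R
        case Rect:
            return v.W * v.H
        default: // no exhaustiveness check, so a default is needed in practice
            panic("unknown shape")
        }
    }

    func main() {
        fmt.Println(area(Circle{R: 1}), area(Rect{W: 2, H: 3}))
    }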


Go is getting more complex over time though. E.g. generics.


>> I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.

By 20% of the effort, do you mean learning curve or productivity?


Funny thing is, that also makes it easier on LLMs / AI... I tried a project a while ago creating the same thing in both Rust and Go. Go's version worked from the start, while Rust's needed a lot of LLM interventions and fixes to get it to compile.

We shall not talk about compile time / resource usage differences ;)

I mean, Rust is nice, but compared to when I learned it like 10 years ago, it really looks a lot more complex these days, like it took too much of a cue from C++.

Go's syntax, meanwhile, is still much the same as it was 10 years ago, with barely anything new. That may anger some people, but even so...

The only thing I'd love to see is reduced executable sizes, because pushing large executables over a dinky upload line for remote testing is not fun.


> I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.

I don't see it. Can you say what 80% you feel like you're getting?

The type systems don't feel anything alike. I guess the syntax is alike in the sense that Go is a semicolon language and Rust, though actually basically an ML, deliberately dresses as a semicolon language, but otherwise not really. They're both relatively modern, so you get decent tooling out of the box.

But this feels a bit like somebody telling me that this new pizza restaurant does a cheese pizza that's 80% similar to the Duck Ho Fun from that little place near the extremely tacky student bar. Duck Ho Fun doesn't have nothing in common with cheese pizza; they're both best (in my opinion) cooked very quickly over high heat. But there's not a lot of commonality.


> I don't see it. Can you say what 80% you feel like you're getting?

I read it as “80% of the way to Rust levels of reliability and performance.” That doesn’t mean that the type system or syntax is at all similar, but that you get some of the same benefits.

I might say that, “C gets you 80% of the way to assembly with 20% of the effort.” From context, you could make a reasonable guess that I’m talking about performance.


Yes, for me I've always pushed the limits of what kinds of memory and CPU usage I can get out of languages: NLP, text conversion, video encoding, image rendering, etc...

Rust beats Go in performance... but nothing like how far behind Java, C#, or scripting languages (Python, Ruby, TypeScript, etc.) are in all the work I've done with them. With Go, I get most of the performance of Rust with very little effort, plus a fully contained stdlib, test suite, package manager, formatter, etc.


Rust is the most defect-free language I have ever had the pleasure of working with. It's a language where you can almost be certain that if it compiles and if you wrote tests, you'll have no runtime bugs.

I can only think of two production bugs I've written in Rust this year. Minor bugs. And I write a lot of Rust.

The language has very intentional design around error handling: Result<T,E>, Option<T>, match, if let, functional predicates, mapping, `?`, etc.

Go, on the other hand, has nil and extremely exhausting boilerplate error checking.

Honestly, Go has been one of my worst languages outside of Python, Ruby, and JavaScript for error introduction. It's a total pain in the ass to handle errors and exceptional behavior. And this leads to making mistakes and stupid gotchas.

I'm so glad newer languages are picking up on and copying Rust's design choices from day one. It's a godsend to be done with null and exceptions.

I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile like Go, but designed in a safe way like Rust. I need it for my smaller tasks and scripting. Swift is kind of nice, but it's too Apple centric and hard to use outside of Apple platforms.

I'm honestly totally content to keep using Rust in a wide variety of problem domains. It's an S-tier language.


> I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile

It could as well be Haskell :) Only partly a joke: https://zignar.net/2021/07/09/why-haskell-became-my-favorite...


Borgo could be that language for you. It compiles down to Go, and uses constructs like Option<T> instead of nil, Result<T,E> instead of multiple return values, etc. https://github.com/borgo-lang/borgo


> I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile like Go, but designed in a safe way like Rust

OCaml is pretty much that, with a very direct relationship with Rust, so it will even feel familiar.


I agree with a lot of what you said. I'm hoping Rust will grow on me as I improve in it. I hate nil/null.

> Go... extremely exhausting boilerplate error checking

This actually isn't correct. Go is the only language that makes you think about errors at every step. If you just ignore them and pass them up, like exceptions or a Maybe, you're basically just exchanging handling errors for assuming the whole thing passes or fails.

If you write actual error checking like Go's in Rust (or Java, or any other language), then Go is often less noisy.

It's just two very different approaches to error handling that the dev community is split on. Here's a pretty good explanation from a rust dev: https://www.youtube.com/watch?v=YZhwOWvoR3I
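To illustrate the "actually handling it" point, a hypothetical Go sketch where one error is acted on rather than passed up (the file name and fallback are made up):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // loadSettings treats a missing file as fine (fall back to defaults)
    // but propagates anything else, like permission errors.
    func loadSettings(path string) (string, error) {
        data, err := os.ReadFile(path)
        if errors.Is(err, os.ErrNotExist) {
            return "defaults", nil // handled here, not passed up
        }
        if err != nil {
            return "", fmt.Errorf("loading settings: %w", err)
        }
        return string(data), nil
    }

    func main() {
        s, err := loadSettings("settings.conf")
        fmt.Println(s, err)
    }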


It’s very common in Go to just pass the error on since there’s no way to handle it in that layer.

Rust forces you to think about errors exactly as much, but in the common case of passing it on it’s more ergonomic.


just be careful with unwrap :)


Go is in the same performance profile as Java and C#. There are tons of benchmarks that support this.


1) For one-off scripts, and 2) if you ignore memory.

You can make almost anything faster by giving it more memory to store data in more optimized formats. That doesn't make the language itself faster.

Part of the problem is that Java in the real world requires an unreasonable number of classes and 3rd party libraries. Even for basic stuff like JSON marshaling. The Java stdlib is just not very useful.

Between these two points, all my production Java systems easily use 8x more memory and still barely match the performance of my Go systems.


I genuinely can’t think of anything the Java standard library is missing, apart from a JSON parser, which is being added.

It’s your preference to prefer one over the other. I prefer Java’s standard library because at least it has a generic Set data structure in it, and C#’s standard library does have a JSON parser.

I don’t think discussions about what is in the standard library really refute anything about Go being within the same performance profile, though.


Memory is the most common tradeoff engineers make for better performance. You can trivially do so yourself with Java: feel free to cut down the heap size, and Java's GC will happily chug along 10-100 times as often without a second thought; they are beasts. The important metric is that Java's GC will be able to keep up with most workloads, and it won't needlessly block user threads from doing their work. Also, not running the GC as often makes Java use surprisingly small amounts of energy.

As for the stdlib, Go's is certainly impressive but come on, I wouldn't even say that in the general case Java's standard library is smaller. It just so happens that Go was developed with the web in mind almost exclusively, while Java has a wider scope. Nonetheless, the Java standard library is certainly among the best in richness.


ZGC? It should be on par with or better than Go's.


Java’s collectors vastly outperform Go’s. Look at the Debian binary-trees benchmark [0]. Go just uses less memory because it’s AOT compiled from the start, and Java’s strategy up until recently was to never return memory to the OS. Java programs are typically on servers where it’s the only application running.

[0] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...



IIRC the native image GC is still the serial GC by default. Which would probably perform the worst out of all the available GCs.

I know on HotSpot they’re planning to make G1 the default for every situation. Even where it would previously choose the serial GC.


I guess the 80% would be a reasonably performant compiled binary with easily managed dependencies? And the extra 20% would be the additional performance and peace of mind provided by the strictness of the Rust compiler.


Single binary deployment was a big deal when Go was young; that might be worth a few percent. Also: automatically avoiding entire categories of potential vulnerabilities due to language-level design choices and features. Not compile times though ;)


Wild guess, but with the current JS/Python dominance, maybe it’s just the benefits of a (modern) compiled language.


What is the alternative though? In strongly typed languages like Go, Rust, etc., you must define the contract. So you either focus on what you need, or you just make a kitchen-sink interface.

I don't even want to think about the global or runtime rewriting that is possible (common) in Java and JavaScript as a reasonable solution to this DI problem.


I'm still fiddling with this so I haven't seen it at scale yet, but in some code I'm writing now, I have a centralized repository for services that register themselves. There is a struct that will provide the union of all possible subservices that they may require (logging, caching, db, etc.). The service registers a function with the central repository that can take that object, but can also take an interface that it defines with just a subset of the values.

This uses reflect and is nominally checked at run time, but over time more and more I am distinguishing between a runtime check that runs arbitrarily often over the execution of a program, and one that runs in an init phase. I have a command-line option on the main executable that runs the initialization without actually starting any services up, so even though it's a run-time panic if a service misregisters itself, it's caught at commit time in my pre-commit hook. (I am also moving towards worrying less about what is necessarily caught at "compile time" and what is caught at commit time, which opens up some possibilities in any language.)

The central service module also defines some convenient one-method interfaces that the services can use, so one service may look like:

    type myDependencies interface {
        services.UsesDB
        services.UsesLogging
    }

    func init() {
        services.Register(func(in myDependencies) error {
            // init here
            return nil
        })
    }
and another may have

    type myDependencies interface {
        services.UsesLogging
        services.UsesCaching
        services.UsesWebCrawler
    }

    // func init() { etc. }
and in this way, each service declaring its own dependencies means each service's test cases only need to worry about what it actually uses, and the interfaces don't pollute anything else. This fully decouples "the set of services I'm providing from my modules" from "the services each module requires", and while I don't get compile-time checking that a module's service requirements are satisfied, I can easily get commit-time checking.

I also have some default fakes that things can use, but they're not necessary. They're just one convenient implementation for testing if you need them.
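A rough sketch of what that central Register/Init pair can look like; the names and details below are illustrative rather than the exact code:

    package services

    import (
        "fmt"
        "reflect"
    )

    // registered holds each service's init function; each one takes a
    // small interface that the master dependency struct must satisfy.
    var registered []any

    // Register records a service's init function (called from init()).
    func Register(fn any) { registered = append(registered, fn) }

    // Init calls every registered function with deps, verifying via
    // reflection that deps satisfies each declared interface.
    func Init(deps any) error {
        dv := reflect.ValueOf(deps)
        for _, fn := range registered {
            ft := reflect.TypeOf(fn)
            if ft.Kind() != reflect.Func || ft.NumIn() != 1 ||
                ft.NumOut() != 1 || ft.In(0).Kind() != reflect.Interface {
                return fmt.Errorf("bad registration: %v", ft)
            }
            if !dv.Type().Implements(ft.In(0)) {
                return fmt.Errorf("deps does not satisfy %v", ft.In(0))
            }
            out := reflect.ValueOf(fn).Call([]reflect.Value{dv})
            if err, ok := out[0].Interface().(error); ok && err != nil {
                return err
            }
        }
        return nil
    }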


tbh this sounds pretty similar to go.uber.org/fx (or dig). Or really almost any dependency injection framework, though e.g. wire is compile-time validated rather than run-time (and thus much harder for some kinds of runtime flexibility - I make no claim to one being better than the other).

DI frameworks, when they're not gigantic monstrosities like in Java, are pretty great.


Yes. The nice thing about this is that it's one function, about 20-30 lines, rather than a "framework".

I've been operating up to this point without this structure in a fairly similar manner, and it has worked fine in the tens-of-thousands-of-lines range. I can see maybe another order or two up I'd need more structure, but people really badly underestimate the costs of these massive frameworks, IMHO, and also often fail to understand that the value proposition of these frameworks often just boils down to something that could fit comfortably in the aforementioned 20-30 lines.


yeah, if it's only 20-30 lines then it's likely overkill to do any way except by hand.

most of the stuff I've done has involved at least 20-30 libraries, many of which have other dependencies and config, so it's on the order of hundreds or thousands of lines if written by hand. it's totally worth a (simple) DI tool at that point.


I'm interested, but when I see a large coordination system sitting on top of any language's primitives, I'm immediately curious what kind of overhead it has. Please add some benchmarks and allocation reports.


I really liked the README; that was a good use of AI.

If you're interested in the idea of writing a database, I recommend you check out https://github.com/thomasjungblut/go-sstables which includes sstables, a skiplist, a recordio format, and other database building blocks like a write-ahead log.

Also https://github.com/BurntSushi/fst, which has a great blog post explaining its compression (and it has been ported to Go). It's really helpful for autocomplete/typeahead when recommending searches to users or doing spelling correction for search inputs.


>>I wrote a full text search engine in Go

>I really liked the README, that was a good use of AI.

Human intelligences, please start saying:

(A)I wrote a $something in $language.

Give credit where it's due. AIs have feelings too.


> AIs have feelings too

Ohh boi, that’s exactly how the movie "Her" started! XD


tysm, i love this, FST is vv cool


I'm sure there are a lot of people like me who question whether keeping bookmarks is actually worth it anymore. I personally found that taking action and writing a blurb about neat things I find helps me remember them more, even if I never use the bookmark/paste/share directly again.

The more mental effort put towards something, the easier it is to remember.


This is the difference between a good idea and the implementation.

People just act differently in "official" topic channels.

It's like when you buy that super secure door lock and the lowest-bid handyman bends it during installation because it's such a pain to align correctly, and now it's just as vulnerable as any other lock.


Yep, also discoverability is not an issue with Slack. You can find most things with a search; people typically don't go scrolling through a channel to find something.


What a Poe's Law of a comment.

Slack's search is … okay … but there are any number of times when I have had issues finding a thread I was looking at earlier.

For all the AI hype of the current moment, search still can't a) rank the alert bot that is just spamming the alerts channel as "not relevant" when "sorting by relevance", or b) find the thread when I use a synonym of an exact word in the thread.

Or the other day I was struggling to find an external channel. I figured it should be easy. But again, I chose a synonym of the name, so miss there. But I thought: by management edict, all of our external channels start with #external-, so I'll just pull up all external channels and linear-search by eyeball … but management had named this one #ext-…


> search still can't a.) rank the alert bot that is just spamming the alerts channel as "not relevant"

I find "Exclude automations" toggle to be good enough. But we might have very different workspaces, as I usually don't see the point of "sorting by relevance" at all: for my purposes, relevance is almost always better approximated by date than whatever Slack's ML team comes up with.


Yes - not only is Slack search underpowered, but also records management folks are likely to configure pruning of Slack content older than a couple years or so. This is IME less likely to be a problem with wiki pages.


People start by searching within a channel, especially when terms are vague or frequently used.

