Hacker News | materielle's comments

It’s sort of surprising how naive developers still are given the countless rug pulls over the past decade or two.

You’re right on the money: the important thing to look at is incentive structures.

Basically all tech companies from the post-great-financial-crisis expansion (Google, post-Ballmer Microsoft, Twitter, Instagram, Airbnb, Uber, etc.) started off user-friendly but all eventually converged towards their investment incentive structures.

One big exception is Wikipedia. Not surprising since it has a completely different funding model!

I’m sure Anthropic is super user-friendly now, while they are focused on expansion and the founding devs still have concentrated political sway. It will eventually converge on its incentive structures to extract profit for shareholders, like all other companies.


I really think corporations are overplaying their hand if they think they can transform society once again in the next 10 years.

Rapid deindustrialization followed by the internet and social media almost broke our society.

Also, I don’t think people necessarily realize how close we were to the cliff in 2007.

I think another transformation now would rip society apart rather than take us to the great beyond.


I worry that if the reality lives up to investors' dreams, it will be massively disruptive for society, which will lead us down dark paths. On the other hand, if it _doesn't_ live up to their dreams, then there is so much invested in that dream financially that it will lead to massive societal disruption when the public is left holding the bag, which will also lead us down dark paths.

It's already made it impossible to trust half of the content I read online.

Whenever I use search terms to ask a specific question these days, there's usually a page of slop dedicated to the answer that appears at the top for relevancy.

Once I realize it is slop, I realize the relevant information could be hallucinated, so I can't trust it.

At the same time, I'm seeing a huge upswing in probably human-created content being accused of being slop.

We're seeing a tragedy of the information commons play out on an enormous scale at hyperspeed.


You trust nearly half??!!??

I think corporations can definitely transform society in the near future. I don't think it will be a positive transformation, but it will be a transformation.

Most of all, AI will exacerbate the lack of trust in people and institutions that was kicked into high gear by the internet. It will be easy and cheap to convince large numbers of people about almost anything.


I'm still not buying that AI will change society anywhere near as much as the internet, or smartphones for that matter.

The internet made it so that you can share and access information in a few minutes, if not seconds.

Smartphones built on the internet by making this sharing and access of information possible from anywhere and by anyone.

AI seems to occupy the same space as Google in the broader internet ecosystem. I don't know what AI provides me that a few hours of Google searches couldn't. It makes information retrieval faster, but that was never the hard part. The hard part was understanding the information, so that you're able to apply it to your particular situation.

Being able to write to-do apps 1000x faster is not innovation!


As a young adult in 2007, what cliff were we close to?

The GFC was a big recession, but I never thought society was near collapse.


We were pretty close to a collapse of the existing financial system. Maybe we’d be better off now if it had happened, but the interim devastation would have been costly.

It felt like the entire global financial system had a chance of collapsing.

We weren't that far away from ATMs refusing to hand out cash, banks limiting withdrawals from accounts (if your bank hadn't already gone under), and a subsequent complete collapse of the financial system. The only thing that saved us from that was an extraordinary intervention by governments, something I am not sure they would be capable of doing today.

You are assuming that the change can only happen in the west.

The rest of the world has mostly been experiencing industrialisation, and was only indirectly affected by the great crash.

If there is a transformation in the rest of the world the west cannot escape it.

A lot of people in the west seem to have their heads in the sand, very much like when Japan and China tried to ignore the west.

China is the world's second biggest economy by nominal GDP, India the fourth. We have a globalised economy where everything is interlinked.


When I look at my own country, it has proven to be open to change. There are people alive today who remember when Christianity dominated public life; now we swear in a gay prime minister.

In that sense, Western countries have proven that they are intellectually very nimble.


Three of the best-known Christians I have known in my life are gay. Two are priests (one Anglican, one Catholic). Obviously the Catholic priest had taken a vow of celibacy anyway, so it's entirely immaterial. I did read an interview with a celeb friend (also now a priest!) of his who said he (the priest I knew) thought people did not know he was gay; we all knew, we just did not make a fuss about it.

Even if you accept the idea that gay sex is a sin, the entire basis of Christianity is that we are all sinners. Possessing wealth is a failure to follow Jesus's commands for instance. You should be complaining a lot more if the prime minister is rich. Adultery is clearly a more serious sin than having the wrong sort of sex, and I bet your country has had adulterous prime ministers (the UK certainly has had many!).

I think Christians who are obsessed with homosexuality, as if it somehow makes people worse than the rest of us, are both failing to understand Christ's message and saying more about themselves than about gays.

If you look at when sodomy laws were abolished, countries with a Christian heritage led the way. There are reasons for this in the Christian ethos of choice and redemption.


> people alive today who remember when Christianity dominated public life; now we swear in a gay prime minister

Why would that be a contradiction? Gay people can't be Christian?


Most of the core products at Google are still written in pre-C++11-style C++.

I wish these services would be rewritten in Go!

That’s where a lot of the development time goes: trying to make incredibly small changes that cause cascading bugs and regressions in a massive 2000s C++ codebase that doesn’t even use smart pointers half the time.

Also, I think the outside world has a very skewed view of Go and how development happens at Google. It’s still a rather bottom-up, or at least distributed, company. It’s hard to get hundreds of teams to actually do something. Most teams just ignored those top-down “write new code in Go” directives and continued using C++, Python, and Java.


I wouldn't say most. Google is known for constantly iterating on its code internally, to the point of not getting anything done other than code churn. While there is use of raw pointers, I'd argue it's still idiomatic to use raw pointers in C++ for non-owning references that are well scoped. Using shared pointers everywhere can be overkill. That doesn't mean the codebase is pre-C++11 in style.
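
To illustrate (a toy sketch with made-up types, not actual Google code): the callee takes a raw pointer as a well-scoped, non-owning view, while ownership stays with a unique_ptr at the call site.

    #include <cstdio>
    #include <memory>
    #include <string>

    struct Config { std::string name; };

    // Non-owning, well-scoped borrow: the caller keeps ownership,
    // so a raw pointer (or a reference) is idiomatic here.
    void Render(const Config* cfg) { std::printf("%s\n", cfg->name.c_str()); }

    int main() {
      auto cfg = std::make_unique<Config>();  // single, clear owner
      cfg->name = "prod";
      Render(cfg.get());  // borrow; no refcount traffic, no shared ownership
    }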

Rewriting a codebase in another language that has no good interop is rarely a good idea. The need to replicate multiple versions of each internal library can become incredibly taxing. Migrations need to be low-risk at Google scale, and if you can't do them piecewise it's often not worth attempting either. Also worth noting that Java is just as prevalent, if not more so, in core products.


I think the problem is actually political capital.

You need someone who deeply understands how to qualify the product.

But also someone with enough political sway to tell entire orgs of thousands of employees to shred their timelines and planning docs and go back to the lab until it’s right.

Without those two pieces, individual devs and leaders know that there’s a problem, but the KPIs and timelines must lurch onwards!


This just feels so backwards. Yes, I know recreating ambiguous issues is annoying because it’s a lot of work, but it’s also our job.

Reminder: we are asking users to give us money in exchange for software.

It’s our job to deliver that working software. It’s not the user’s job to hold our hands and pep talk us into fixing problems. Users can and should find another product that will just do it for them without the whining.

I think the real point of the website, besides joking around, is poking fun at the broken state of the software industry, where a bunch of whiny developers and managers will make a million tired excuses for why their software doesn’t just work.

Highlighting bug-report and bureaucratic processes in response to “your keyboard is jank” is exactly the mindset we need to change.

The point isn’t to start a forum or technical conversation with Apple devs. The point is to laugh at them because their software sucks and “just one more Jira ticket” isn’t going to fix it.


Hopefully the published postmortem will announce that all features will be frozen for the foreseeable future and every last employee will be focused on reliability and uptime?

I don’t think GitHub cares about reliability if it does anything less than that.

I know people have other problems with Google, but they do actually have incredibly high uptime. This policy was frequently applied to entire orgs or divisions of the company if they had one outage too many.


I don’t think that the parent comment is saying all of the bugs would have been prevented by using Rust.

But in the listed categories, I’m equally skeptical that none of them would have benefited from Rust even a bit.


That’s not my point - just that “state machine races” is a too-broad category to say much about how Rust would or wouldn’t help.


Is that really true, though?

First off, you’re ignoring error bars. On average, frontier models might be 99.95% accurate. But for many work streams, there are surely tail cases where a series of questions only produces 99% accuracy (or even less), even in the frontier-model case.
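
And those per-step gaps compound. A back-of-the-envelope sketch (the accuracy numbers and the 20-step chain are made up, and the steps are assumed independent):

    #include <cmath>
    #include <cstdio>

    int main() {
      // Probability that an entire 20-step chain succeeds, given a
      // fixed per-step accuracy and independence between steps.
      for (double p : {0.9995, 0.99}) {
        std::printf("per-step %.4f -> whole chain %.1f%%\n",
                    p, 100.0 * std::pow(p, 20));
      }
      // per-step 0.9995 -> whole chain 99.0%
      // per-step 0.9900 -> whole chain 81.8%
    }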

The challenge that businesses face is how to integrate these fallible models into reliable and repeatable business processes. That doesn’t sound so different than software engineering of yesteryear.
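
And the toolbox looks familiar: for instance, wrapping a fallible step in validation plus bounded retries. A minimal sketch, where callModel and looksValid are hypothetical stand-ins, not any real API:

    #include <cstdio>
    #include <functional>
    #include <optional>
    #include <string>

    // Guardrail around a fallible step: accept only output that passes
    // validation, retry a bounded number of times, otherwise escalate.
    std::optional<std::string> reliableStep(
        const std::function<std::string()>& callModel,
        const std::function<bool(const std::string&)>& looksValid,
        int maxAttempts = 3) {
      for (int i = 0; i < maxAttempts; ++i) {
        std::string out = callModel();
        if (looksValid(out)) return out;
      }
      return std::nullopt;  // caller falls back to a human or a default
    }

    int main() {
      auto result = reliableStep(
          [] { return std::string("42"); },                  // fake model call
          [](const std::string& s) { return !s.empty(); });  // fake validator
      std::printf("%s\n", result.value_or("escalate").c_str());
    }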

I suspect that as AI hype continues to level off, business leaders will come to their senses and realize that spending on integration practices is more productive at the margin than squeezing out minor gains from frontier models.


Why isn’t it the governments role?

Because you think it’s not?

What if I, and many other people, think that it is?


Because it's ultimately a form of censorship. Governments shouldn't be in the business of shutting down speech some people don't like, and in the same way shouldn't be in the business of shutting down software features some people don't like. As long as nobody is being harmed, censorship is bad and anti-democratic. (And we make exceptions for cases of actual harm, like libelous or threatening speech, or a product that injures or defrauds its users.) Freedom is a fundamental aspect of democracy, which is why freedoms are written into constitutions so a simple majority vote can't remove them.


1) Integration or removal of features isn't speech. And has been subject to government compulsion for a long time (e.g. seat belts and catalytic converters in automobiles).

2) Business speech is limited in many, many ways. There is even compelled speech in business (e.g. black box warnings, mandatory sonograms prior to abortions).


I said, "As long as nobody is being harmed". Seatbelts and catalytic converters are about keeping people safe from harm. As are black box warnings and mandatory sonograms.

And legally, code and software are considered a form of speech in many contexts.

Do you really want the government to start telling you what software you can and cannot build? You think the government should be able to outlaw Python and require you to do your work in Java, and outlaw JSON and require your APIs to return XML? Because that's the type of interference you're talking about here.


Mandatory sonograms aren't about harm prevention. (Though yes, I would agree with you if you said the government should not be able to compel them.)

In the US, commercial activities do not have constitutionally protected speech rights, with the sole exception of "the press". This is covered under the commerce clause and the first amendment, respectively.

I assemble DNA, I am not a programmer. And yes, due to biosecurity concerns there are constraints. Again, this might be covered under your "does no harm" standard. Though my making smallpox, for example, would not be causing harm any more than someone building a nuclear weapon would cause harm. The harm would come from releasing it.

But I think, given that AI has encouraged people to suicide, and would allow minors the ability to circumvent parental controls, as examples, that regulations pertaining to AI integration in software, including mandates that allow users to disable it (NOTE, THIS DOESN'T FORCE USERS TO DISABLE IT!!), would also fall under your harm standard. Outside of that, the leaking of personally identifiable information does cause material harm every day. So there needs to be proactive control available to the end user regarding what AI does on their computer, and how easy it is to accidentally enable information-gathering AI when that was not intended.

I can come up with more examples of harm beyond mere annoyance. Hopefully these examples are enough.


Those examples of harm are not good ones.

The topic of suicide and LLMs is a nuanced and complex one, but LLMs aren't suggesting it out of nowhere when summarizing your inbox or calendar. Those are conversations users actively start.

As for leaking PII, that's definitely something to be aware of, but it's not a major practical concern for any end users so far. We'll see if prompt injection turns into a significant real-world threat and what can be done to mitigate it.

But people here aren't arguing against LLM features based on substantial harms. They're doing it because they don't like it in their UX. That's not a good enough reason for the government to get involved.

(Also, regarding sonograms, I typed without thinking -- yes of course the ones that are medically unnecessary have no justification in law, which is precisely why US federal courts have struck them down in North Carolina, Indiana, and Kentucky. And even when they're medically necessary, that's a decision for doctors not lawmakers.)


> Those examples of harm are not good ones.

I emphatically disagree. See you at the ballot box.

> but it's not a major practical concern for any end users so far.

My wife came across a post or comment by a person considering preemptive suicide out of fear that their ChatGPT logs might ever get leaked. Yes, fear of leaks is a major practical concern for at least that user.


Fear of leaks, or the other harms you mention, have nothing to do with the question at hand, which is whether these features are enabled by default.

If someone is using ChatGPT, they're using ChatGPT. They're not inputting sensitive personal secrets by accident. Turning Gemini off by default in Gmail isn't going to change whether someone is using ChatGPT as a therapist or something.

You seem to simply be arguing that you don't like LLMs. To which I'll reply: if they do turn out to present substantial harms that need to be regulated, then so be it, and regulate them appropriately.

But that applies to all of them, and has nothing to do with the question at hand, which is whether they can be enabled by default in consumer products. As long as chatgpt.com and gemini.google.com exist, there's no basis for asking the government to turn off LLM features by default in Gmail or Calendar, while making them freely available as standalone products. Does that make sense?


I think investors would certainly love this. So why hasn’t it already happened?

My guess: they would lose a ton of cultural cachet.

Turning OpenAI into an ads business is basically admitting that AGI isn’t coming down the pipeline anytime soon. Yes, I know people will make some cost-based argument that ads + AGI is perfectly logical.

But that’s not how people will perceive things, and OpenAI knows this. And I think the masses have a point: if we are really a few years away from AGI replacing the entire labor force, then there are surely higher-margin businesses they can engage in than ads. Especially since they are allegedly a non-profit.

After Google and Facebook, nobody is buying the “just a few ads to fund operating costs” argument either.


Yup, it’s essentially an admission of failure. I think the people who were expecting AI to improve exponentially are disappointed by its current state, where it’s basically just a useful tool to assist workers in some highly specific fields.


Highly specific fields? They are trying to get you to reach for AI when an emailed “ok, thanks” would do. They want you to lose your ability to write and formulate thoughts without the tool. Then it is really over. That is the golden goose. Not a couple of data scientists.


> it’s essentially an admission of failure

A multibillion dollar failure is fine by investors. Altman hasn’t been peddling the AGI BS to them. That’s aimed at the public and policymakers.


Is a trillion dollar failure okay with investors?


Aka you need them deep enough into the trap they can’t escape, before you trigger it.


Yes, and there are layers. Remember when Google ads had yellow backgrounds? I'm sure OpenAI will find a way to do ads "ethically"... for a while, until people get comfortable, and that's when they will start to make ChatGPT increasingly manipulative.


Gotta make that line go up and to the right!


> The goals of the advertising business model do not always correspond to providing quality search to users.

- Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine


Can anyone explain to me what ChatGPT does that traps people? I get the value as a tool; I like using Copilot, but ChatGPT doesn't offer me value that any other LLM couldn't. Given that everyone is quickly rolling "AI" into their own stuff, I don't see what ChatGPT's killer app is. If anything, I think Gemini is better positioned to capture the general user market.


They make it a habit to use them, by offloading that part of their thinking/process to them. It’s similar to Google Maps, or even Google itself.

When was the last time you went to an actual physical library, for instance? Or pulled out a paper map?

Gemini is a competitor, yes. But most people still go to Google at this point, even if there are a ton of competitors.

That is what the race is about (in large part), who can become ‘the norm’.


I also wouldn't underestimate Google's ability to nudge regular users towards whichever AI surface they want to promote. My highly non-technical mom recently told me she started using Google's "AI Mode" or whatever it's called for most of her searches (she says she likes how it can search/compare multiple sites for browsing house listings and stuff).

She doesn't really install apps and never felt a need to learn a new tool like "ChatGPT" but the transition from regular Google search to "AI Search" felt really natural and also made it easy to switch back to regular search if it wasn't useful for specific types of searches.

It definitely reduces cognitive load for an average user to not need to switch between multiple apps/websites: she can look up hours/reviews via Google Maps, search for "facebook.com" to navigate to a site, and now run AI searches, all in the same familiar places on her phone. So I think Google is still pretty "sticky" despite ChatGPT being the buzzword everyone hears, now that Google has caught up to OpenAI in terms of model capability/features.


> When was the last time you went to an actual physical library

My eyesight is making paper books harder and harder to read, so I don't go to libraries and bookstores as much as I used to. But I think libraries are still relatively popular with families, because they're sites of various community activities as well as safe, quiet places to let kids roam and entertain themselves while the parents are nearby.

When I was a kid, my parents went to the library much more often than they do now, because they were taking me and my sister there. And then we would all get books before we came home.

Not saying you're entirely wrong, but there's a significant part of this that is "changing rhythms of life as we age", not just "changing times".


It used to be that people went to the library to look things up, as a primary source for finding the information they needed, not just as a community center.

That is my point.


> Gemini is a competitor, yes. But most people still go to Google at this point, even if there are a ton of competitors.

Yeah, that's my point. If Google is good enough, I don't think people are going to want to do those extra steps, just as in your Google Maps example. There might be better services out there, but Google Maps is just too convenient.


The branding is so strong and it works well enough (I’d say, according to the perception of most people) that it’s just the first “obvious” choice.

Akin to nobody getting fired for choosing AWS, nobody would think poorly of you using ChatGPT.

I don’t think Claude has that same presence yet.

Google has a reputation for being a risk to develop with, and I think they flopped on marketing for general users. It’s hard to compete with “ChatGPT”, where there’s a perceived call to action right in the name; you don’t really know what Gemini is for until it’s explained.


Would've happened if Claude and Gemini weren't things. But they are.

Regardless of AGI, being known as the only LLM that introduced ads sounds very bad.


It's also impossible for this to work properly: either they also screw up the entire API, breaking everyone's programmatic access for coding and regular apps, or else everyone just starts making wrappers around the API to build ad-free chatbots.


Why would they need ads on the API though? The API is paid usage. They just need a few years of scaling for it to be profitable. Some models are already a net profit on API usage.


I agree, but even if AGI is possible within 5-10 years, it must be hard to justify maintaining or even increasing this level of burn for much longer.

