Hacker News | Fernicia's comments

The only paid subscription getting ads is the one they created last week, which costs less than half as much as any other SOTA AI subscription on the market. Normal Pro users aren't getting ads.

Normal pro users aren't getting ads, yet.

Yet?

So a paid tier is getting ads. Got it.

> Reports said the “AI” was largely 1000+ people in India watching the cameras.

This was totally fake news though. Those people were labeling training data and reviewing low confidence labels, after the fact. There wasn't ever live monitoring of shoppers.


Has Gemini lost its ability to run JavaScript and Python? I swear it could when it launched, but now it says it doesn't have the ability. An annoying regression when Claude and ChatGPT are so good at it.


This regression seems to have happened in the past few days. I suspected it was hallucinating the run and confirmed it by asking Gemini to output the current date/time. The UTC time it reported was in the future relative to my clock. Some challenging math problems were producing wrong results. Gemini will acknowledge something is wrong if you push it to explain the discrepancies, but it can't explain why.
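The timestamp check above can be automated. The sketch below is a hypothetical helper, not anything Gemini exposes: it assumes you asked the model to reply with a bare ISO timestamp, and the 5-minute tolerance is an arbitrary assumption.

```python
from datetime import datetime, timedelta, timezone

def looks_hallucinated(reported_utc: str, tolerance_min: int = 5) -> bool:
    """Return True if the model's claimed execution time drifts more than
    tolerance_min minutes from the real clock. Assumes the model replied
    with a naive ISO timestamp in UTC, e.g. '2025-01-01T12:00:00'."""
    reported = datetime.fromisoformat(reported_utc).replace(tzinfo=timezone.utc)
    drift = abs(datetime.now(timezone.utc) - reported)
    return drift > timedelta(minutes=tolerance_min)
```

If the reported time is hours off, or in the future, the "execution" was almost certainly generated text rather than a real sandbox run.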


You've missed the sarcasm in the OP.

On a side note, the suggestion that police numbers don't affect crime is obviously false. We've seen what an arbitrarily large police presence does to Washington DC this year with the National Guard deployment.


OpenAI keeping 4o available in ChatGPT was, in my opinion, a sad case of audience capture. The outpouring from some subreddit communities showed how many people had been seduced by its sycophancy and had formed proto-social relationships with it.

Their blogpost about the 5.1 personality update a few months ago showed how much of a pull this section of their customer base had. Their updated response to someone asking for relaxation tips was:

> I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately.

How does OpenAI get it so wrong, when Anthropic gets it so right?


> How does OpenAI get it so wrong, when Anthropic gets it so right?

I think it's because of two different operating theories. Anthropic is making tools to help people and to make money. OpenAI has a religious zealot driving it because they think they're on the cusp of real AGI, and these aren't bugs but signals they're close. It's extremely difficult to keep yourself in check, and I think Altman no longer has a firm grasp on what is possible today.

The first principle is that you must not fool yourself, and you are the easiest person to fool. - Richard P. Feynman


I think even Altman himself must know the AGI story is bogus and exists to prop up the bubble.


I think the trouble with arguments about AGI is that they presume we all have similar views of, and respect for, thought and human intelligence, while the range of views is maybe wider than most would imagine. It's also maybe a bit of selection bias that those who make it through academic systems with high intellectual rigor tend, on average, to have more romantic or irrational ideas about impressive human intelligence and genius. But it's also quite possible to view intelligence as pattern-matching neural networks and filtering, where much of it is flawed and even the most impressive results come from pretty inconsistent minds relying on recursively flawed internal critic systems, etc.

Looking at the poem in the article, I would be more inclined to call the ending human-written because it seemed kind of crap, like what I'd expect from an eighth grader's poem assignment, but this is probably down to the lower availability of examples for the requestor's particular obsessions.


I'm afraid he might be a true believer. The more money and/or power one gets, the fewer people push back against fanciful ideas or simply being wrong, and one can believe one is right about everything.


> How does OpenAI get it so wrong, when Anthropic gets it so right?

Are you saying people aren't having proto-social relationships with Anthropic's models? Because I don't think that's true; it seems people use ChatGPT, Claude, Grok, and some other specific services too, though ChatGPT seems the most popular. Maybe that just reflects general LLM usage, then?

Also, what is "wrong" here really? I feel like the whole concept is so new that it's hard to say for sure what is best for actual individuals. It seems like we ("humanity") are rushing into it, no doubt, and I guess we'll find out.


> Also, what is "wrong" here really?

If we're talking generally about people having parasocial relationships with AI, then yea it's probably too early to deliver a verdict. If we're talking about AI helping to encourage suicide, I hope there isn't much disagreement that this is a bad thing that AI companies need to get a grip on.


Yes, obviously, but you're right, I wasn't actually clear about that. Preventing suicides is concern #1, my comment was mostly about parent's comment, and I kind of ignored the overall topic without really making that clear. Thanks!


> and had formed proto-social relationships with it.

I think the term you're looking for is "parasocial."


Ah yes thank you


The only real results on Google are the article and this Hacker News post...


Did you search "vienam" with the quotes? DuckDuckGo turns up a number of articles (albeit in at least one case the typo is in the metadata, not the article itself).


Couldn't you get around that by having a "zoom" feature on a very large but distant monitor?


Yes. You can make a low-resolution monitor (like 800x600px, once upon a time a usable resolution) and/or provide zoom and panning controls.

I've tried that combination in an earlier iteration of Lenovo's smart glasses, and it technically works. But the experience you get is not fun or productive. If you need to do it (say, to work on confidential documents in public) you can, but it's not something you'd do in a normal setup.


Yes, but that can create major motion sickness issues - motion that does not correspond to the user's actual physical movements creates a dissonance that is expressed as motion sickness for a large portion of the population.

This is the main reason many VR games don't let you just walk around and opt for teleportation-based movement systems - your avatar moving while your body doesn't can be quite physically uncomfortable.

There are ways of minimizing this - for example, some VR games give you "tunnel vision" by blacking out peripheral vision while the movement is happening. But overall there are a lot of ergonomic considerations here and no perfect solution. The equivalent for a virtual desktop might be to limit the size of the window while the user is zooming/panning.
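The "tunnel vision" mitigation can be sketched as a speed-driven vignette: the faster the artificial locomotion, the more of the periphery gets masked. The function name and all parameter values below are illustrative assumptions, not from any particular engine.

```python
def vignette_strength(speed: float, max_speed: float = 5.0,
                      floor: float = 0.0, ceiling: float = 0.8) -> float:
    """Map artificial locomotion speed (m/s) to the fraction of the
    peripheral view to black out, clamped to [floor, ceiling].
    Stationary users get no vignette; fast movement gets the maximum."""
    t = max(0.0, min(1.0, speed / max_speed))  # normalize and clamp to [0, 1]
    return floor + (ceiling - floor) * t
```

A real implementation would also smooth the value over a few frames so the mask fades in and out rather than popping.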


For a small taste of what using that might be like, turn on screen magnification on your existing computer. It's technically usable, but not particularly productive or pleasant if you don't /have/ to use it.


Anti-theft, perhaps? Last March a guy was able to sneak onto a Delta flight by taking a picture of someone else's QR code. Some ticketing apps have temporal QR codes that are resistant to this exploit.


Wouldn't that be noticed when the actual passenger tried to board and the system said they were already on board?


They didn't verify the passenger's identity at the gate?


ID is never checked at the gate for domestic flights, only international, at least in the US.

This was the case being referenced: https://abc7.com/post/wicliff-yves-fleurizard-stowaway-secur...


I have never been asked to show ID on domestic flights at the gate.


For a period after 9/11, ID was required to be shown at the gate on domestic flights. I don't recall when that stopped, but it's been a while (and apparently long enough ago that some have never had to do it).


I seem to recall IDs occasionally being checked at the gate prior to 9/11 as well. Memory is fuzzy, but they weren't checking boarding passes or IDs at security. Back then, though, I would always get my boarding pass at the check-in counter (sometimes exchanging an actual ticket for the boarding pass).


> Memory is fuzzy…

Yeah, I can’t recall well enough to agree with you, can’t remember enough to dispute it, either. That was a long time ago. :-)


I spent some time looking for old information; the best I could come up with was this article [1] from 1996 about requiring photo ID at check-in, which doesn't mention checking at the gate, but does mention why the airlines might be happy to do it (protect revenue by making sure passengers don't fly on other people's tickets... unless they share their name with the other person). This article [2], also from 1996, is a little less precise about whether people were denied at check-in or by gate agents.

I think it's all quite hazy, because if you had no checked bags and it was a small airport, you might just go to the gate and try to get your boarding pass there. That, and 24 years have passed. :D

[1] https://www.latimes.com/archives/la-xpm-1996-09-11-fi-42564-...

[2] https://www.chicagotribune.com/1996/12/29/no-match-no-flight...


You can reject all of these permission requests and the app still works.


"Other Data" and "Identifiers" too? Don't remember seeing requests for those in any application.


The app still works /for now/.


> The principles behind the free market are flawed

Can you go into specifics?


The so-called "free market" (not to be confused with laissez-faire) assumes perfect "information symmetry" and perfectly rational market participants, which is effectively impossible in this particular reality, and concerns itself mostly with the marginal eventual state. It is a model.

E.g., the strategy "use VC money to subsidize costs until all competitors are bankrupt, then hike prices to recoup" is not really reflected in this "free market" model.


> use VC money to subsidize cost until all competitors are bankrupt then hike prices to recoup

Can you give some examples of this happening in real life?

None of the companies I can think of that people criticised for operating unprofitably, such as Amazon retail or Uber, were able to corner their markets.

Harvey Norman, Target, Argos, and Walmart all still exist and compete with Amazon retail. Most towns still operate normal taxi services; Lyft, FreeNow, and Bolt all compete with Uber.

VC funding subsidising pricing, albeit temporarily, is still good for consumers. It doesn't seem to imply higher eventual prices. The opposite seems true, in fact.


> Can you give some examples of this happening in real life?

Austin had a local rideshare app, RideAustin, that entered the scene when Uber/Lyft left the area over a law the city passed that they had failed to propagandize against. It was non-profit, worked really well, and paid well. When Uber and Lyft came back, they heavily subsidized the cost of doing business in Austin, both arbitrarily lowering prices and heavily juicing rewards for drivers. Conveniently, once RideAustin shut down because most drivers and riders had moved on to either app, those rewards started getting clawed back and prices went way back up.


> Can you give some examples of this happening in real life?

Uber is the canonical example of this, I guess.

> None of the examples I can think of where people criticised the companies for operating unprofitably, such as Amazon retail or Uber, were able to corner their markets.

It's not about people criticising this behavior or not. It's about whether the behavior is factored into the model. The free market model assumes that every participant in the market has the same access to capital, ensuring that every participant can equally undercut everyone else, making this particular strategy irrational and therefore not part of the model.


Just look at how hotel owners despise Booking.com.


Yes, there is nothing wrong with working hard and making money. But if you use that money against the rest of us, then we have a problem. Making a huge pile of money to corner a market is one of those scenarios, but there are many.

