Hacker News | dmbche's comments

I'm fairly certain it's possible to extract psilocybin from the mushroom, giving the same advantages that the synthetic would have!

Edit0: for a more thorough look: https://www.mdpi.com/1424-8247/18/3/380


that's generally much more expensive

Than R&D for a brand new synthetic drug?

As if that's a guaranteed win. The low-hanging fruit was to recreate what is already in nature. Creating something brand new, never seen before, would be a greenfield project that I'm sure most of Big Pharma is not a fan of.

I'm not certain I catch your drift - I'm saying the R&D work they did to synthesize COM360 or whatever it's called is probably more expensive than using known means to synthesize/extract psilocybin (psilocybin was first synthesized in the 1950s).

Sounds to me as if you're now suggesting researching a new way to make a synthetic drug where before I read it as researching a new drug nobody has found yet

I'm not sure what you mean either way!

Have a good one, I don't think we're in disagreement.


"The more revealing signal is in the tail. The longest turns tell us the most about the most ambitious uses of Claude Code, and point to where autonomy is heading. Between October 2025 and January 2026, the 99.9th percentile turn duration nearly doubled, from under 25 minutes to over 45 minutes (Figure 1)."

That's just straight up nonsense, no? How much cherry picking do you need?


What do you think is wrong about this? It matches my experience pretty well.

Short window, small and unrepresentative data pool, cherry-picking the 0.1% longest turn times without turn time being demonstrated as a proxy for autonomy.

Looks to me like fishing for some data that seems good.
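The sampling objection can be sketched numerically. A minimal simulation (hypothetical lognormal "turn durations", not the actual data from the post) shows how unstable a 99.9th-percentile estimate is, since it hinges on roughly the top one-in-a-thousand observations:

```python
import random

def p999(samples):
    # Nearest-rank 99.9th percentile: index into the sorted sample.
    s = sorted(samples)
    return s[int(0.999 * (len(s) - 1))]

random.seed(0)
# Simulated turn durations in minutes: most turns are short,
# a heavy lognormal tail produces the rare very long ones.
draws = [random.lognormvariate(0, 1.5) for _ in range(5000)]
print(p999(draws))

# Re-estimating the tail from smaller resamples shows how much
# the estimate swings from one sample of users to another.
estimates = [p999(random.sample(draws, 2000)) for _ in range(5)]
print(min(estimates), max(estimates))
```

On a small or skewed pool, run-to-run spread in the tail estimate can easily be of the same order as the "doubling" being reported, which is the commenter's point about reading too much into the 99.9th percentile.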


Most tasks simply don't take that long.

Even though I sometimes have 30-45 minute tasks, the vast majority of use is quick questions or tiny bugfixes. It wouldn't be helpful to measure those; they are essentially a solved problem, and the runtime is limited by the complexity of the task, not model capabilities.


10/10 read

Parent comment seems sarcastic

I believe it's in reference to things like this:

https://steve-yegge.medium.com/gas-town-emergency-user-manua...


So what happens if the AI companies can't make money? I see more and more advances and breakthroughs, but they are taking on debt with no revenue in sight.

From what I understand, debt is a bad sign here since they could just sell more shares but aren't (either the valuation is stretched or there are no buyers).

Just a recession? Something else? Aren't they too big to fail?

Edit0: Revenue isn't the right word, profit is more correct. Amazon not being profitable fucks with my understanding of business. Not an economist.


>taking in debt and no revenue in sight.

Which companies don't have revenue? Anthropic is at a run rate of $14 billion (up from $9B in December, which was up from $4B in July). Did you mean profit? They expect to be cash-flow positive in 2028.


Yes, thank you, mixing up my words here - I remembered one of the companies having raised over $100B while having about $10B in revenue.

AI will kill SaaS moats and thus revenue. Anyone can build new SaaS quickly. Lots of competition will lead to marginal profits.

AI will kill advertising. Whatever sits at the top "pane of glass" will be able to filter ads out. Personal agents and bots will filter ads out.

AI will kill social media. The internet will fill with spam.

AI models will become commodity. Unless singularity, no frontier model will stay in the lead. There's competition from all angles. They're easy to build, just capital intensive (though this is only because of speed).

All this leaves is infrastructure.


Not following some of the jumps here.

Advertising, how will they kill ads any better than the current cat and mouse games with ad blockers?

Social Media, how will they kill social media? Probably 80% of the LinkedIn posts are touched by AI (lots of people spend time crafting them, so even if AI doesn't write the whole thing you know they ran the long ones through one) but I'm still reading (ok maybe skimming) the posts.


> Advertising, how will they kill ads any better than the current cat and mouse games with ad blockers?

The Ad Blocker cat and mouse game relies on human-written metaheuristics and rules. It's annoying for humans to keep up. It's difficult to install.

Agents/Bots or super slim detection models will easily be able to train on ads and nuke them whatever form they come in: javascript, inline DOM, text content, video content.

Train an anti-Ad model and it will cleanse the web of ads. You just need a place to run it from the top.

You wouldn't even have to embed this into a browser. It could run in memory with permissions to overwrite the memory of other applications.

> Social Media, how will they kill social media?

MoltClawd was only the beginning. Soon the signal will become so noisy it will be intolerable. Just this week, X's Nikita Bier suggested we have less than six months before he sees no solution.

Speaking of X, they just took down Higgsfield's (valued at $1.3B) main account because they were doing it across a molt bot army, and they're not the only ones. Extreme measures were the only thing they could do. For the distributed spam army, there will be no fix. People are already getting phone calls from this stuff.


> AI will kill SaaS moats and thus revenue. Anyone can build new SaaS quickly.

I'm LLM-positive, but for me this is a stretch. Seeing it pop up all over media in the past couple weeks also makes me suspect astroturfing. Like a few years back when there were a zillion articles saying voice search was the future and nobody used regular web search any more.


AI models will simply build the ads into the responses, seamlessly. How do you filter out ads when you search for suggestions for products, and the AI companies suggest paid products in the responses?

Based on current laws, does this even have to be disclosed? Will laws be passed to require disclosure?


What happens if oil companies can't make money? They will restructure society so they can. That's the essence of capitalism, the willingness to restructure society to chase growth.

Obviously this tech is profitable in some world. Car companies can't make money if we live in walking distance and people walk on roads.


They're using the ride-share app playbook: subsidize the product to reach market saturation, then, once you've found a market segment that depends on your product, raise the price to break even. One major difference, though, is that ride-share apps haven't really changed in capabilities since they launched: it's a map that shows a little car with your driver coming and a pin where you're going. But it's reasonable to believe that AI will have new fundamental capabilities in the 2030s, 2040s, and so on.

I think you're looking for Facebook, not HN

Do you find this interesting to make and read?


I am probably the only person who has ever willingly created a complete AI-generated book and willingly read it cover to cover. Last summer. I called it "Claude Code: A Primer", a Claude Code origin story. Good book, completely made up.

The technology is here, let's explore it. And when somebody states something in an HN comment, let's just try it. Imperfect method, but better than just talking hypothetically about AI.

Will AI write better books than any written until now? With more insight than ever created before? Would we read them? Is it even possible? If not, why not? What is missing?

Those are the questions I find fascinating. I for one want to find out, through experimentation, not via preconceived beliefs.


Not


You might like SuperCollider! It's free and a programming language made for sound design. Just writing code - but quite far from a DAW.


What an awful take


Why? Give me the awesome take


Neither factual nor interesting. No - ask an LLM, I'm sure you'll be interested.

