
I actually pay 166 Euros a month for Claude Teams. Five seats. And I only use one. For myself. Why do I pay so much? Because the normal paid version (20 USD a month) interrupts the chats after a dozen questions and wants me to wait a few hours until I can use it again. But the Teams plan gives me way more questions.

But why do I pay that much? Because Claude in combination with the Projects feature, where I can upload two dozen or more files, PDFs, text, and give it a context, and then ask questions in this specific context over a period of a week or longer, come back to it and continue the inquiry, gives me superpowers. It feels like having a handful of researchers at my fingertips that I can brainstorm with, ask to review the documents, and have come up with answers to my questions. All of this is unbelievably powerful.

I'd be OK with 40 or 50 USD a month for one user, but alas, Claude won't offer it. So I pay 166 Euros for five seats and use one. Because it saves me a ton of work.



Kagi Ultimate (US$25/mo) includes unlimited use of all the Anthropic models.

Full disclosure: I participated in Kagi's crowdfund, so I have some financial stake in the company, but I mainly participated because I'm an enthusiastic customer.


I'm uninformed about this, it may just be superstition, but my feeling while using Kagi in this way is that after using it for a few hours it gets a bit more forgetful. I come back the next day and it's smart again, for a while. It's as if there's some kind of soft throttling going on in the background.

I'm an enthusiastic customer nonetheless, but it is curious.


I noticed this too! It's dramatic even within the same chat. I'll come back the next day, and even though I still have the full convo history, it's as if it completely forgot all my earlier instructions.


Makes sense. Keeping the conversation implies that each new message carries the whole history, again. You need to create new chats from time to time, or switch to a different model...
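For anyone curious what that looks like concretely, here's a minimal sketch using the Anthropic Python SDK (the model name and helper function are just illustrative): the API itself is stateless, so the client re-sends the full message list on every turn, which is why long-running chats get heavier and heavier.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    history = []                    # the client keeps the history; the API is stateless

    def ask(question: str) -> str:
        # Illustrative helper: every call re-sends the entire conversation so far.
        history.append({"role": "user", "content": question})
        response = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=1024,
            messages=history,
        )
        reply = response.content[0].text
        history.append({"role": "assistant", "content": reply})
        return reply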


This is my biggest gripe with these LLMs. I primarily use Claude, and it exhibits the same behavior described here. I'll find myself in a flow state and then somewhere around hour 3 it starts to pretend it isn't capable of completing specific tasks that it had been performing for hours, days, weeks. For instance, I'm working on creating a few LLCs with their requisite social media handles and domain registrations. I _used_ to be able to ask Claude to check all US state LLC registrations, all major TLD domain registrations, and the USPTO against particular terms and similar derivations. Then one day it just decided to stop doing this. And it tells me it can't search the web or whatever. Which is bullshit, because I was verifying all of this data and ensuring it wasn't hallucinating - which it never was.


Could it be that you're running out of available context in the thread you're in?


Doubtful. I started new threads using carbon-copy prompts. I'll research some more to make sure I'm not missing anything, though.


Did you ever read Accelerando? I think it involved a large number of machine generated LLCs...


No, but I'll give the Wikipedia summary a gander :)


Is that within the same chat?


The flow lately has been transforming test cases to accommodate interface changes, so I'm not asking it to remember something from several hours ago, I'm just asking it to make the "same" transformation from the previous prompt, except now to a different input.

It struggles with cases that exceed 1000 lines or so. Not that it loses track entirely at that size, it just starts making dumb mistakes.

Then after about 2 or 3 hours, the size at which it starts to struggle drops to maybe 500. A new chat doesn't seem to help, but who can say, it's a difficult thing to quantify. After 12 hours, both me and the AI are feeling fresh again. Or maybe it's just me, idk.

And if you're about to suggest that the real problem here is that there's so much tedious filler in these test cases that even an AI gets bored with them... Yes, yes it is.


> Kagi Ultimate (US$25/mo) includes unlimited use of all the Anthropic models.

What am I losing here if I switch over to this from my current Claude subscription?


You'll also lose the opportunity to use the MCP integration in Claude Desktop. It's still early on, but this has huge potential.


Claude projects mostly. Kagi’s assistant AI is a basic chat bot interface.


But why would Claude offer this cheaper through a third party?


It probably isn’t cheaper for Kagi per token but I assume most people don’t use up as much as they can, like with most other subscriptions.

I.e. I’ve been an Ultimate subscriber since they launched the plan and I rarely use the assistant feature because I’ve got a subscription to ChatGPT and Claude. I only use it when I want to query Llama, Gemini, or Mistral models which I don’t want to subscribe to or create API keys for.


Thanks for sponsoring my extensive use of Claude via Kagi.


Thanks for the tip! Now I'm a Kagi user too.


How would you rate Kagi Ultimate vs Arc Search? I.e., is it scraping relevant websites live and summarising them? Or is it just access to ChatGPT and other models (with their old training data)?

At some point I'm going to subscribe to Kagi again (once I have a job), so I'd be interested to see how it rates.


I've never tried Arc search, so I couldn't say.

I think it's all the LLMs + some Kagi-specific intelligence on top because you can flip web search on and off for all the chats.


I presume no access to Anthropic Projects?


I bet you never get tired of being told LLMs are just statistical computational curiosities.


There are people like that. We don't know what's up with them.


It's pretty easy to explain. You see, they're unable to produce a response that isn't in their training data. They're stochastic parrots.


They extract concepts from their training data and can combine concepts to produce output that isn't part of their training set, but they do require those concepts to be in their training data. So you can ask them to make a picture of your favorite character fighting mecha on an alien planet and it will produce a new image, as long as your favorite character is in their training set. But the extent it imagines an alien planet or what counts as mecha is limited by the input it is trained on, which is where a human artist can provide much more creativity.

You can also expand it by adding more concepts to better specify things. For example, you can specify that the mecha look like alphabet characters while the alien planet expresses the randomness of prime numbers, and that might influence the AI to produce a more unique image, since you are now getting into really weird combinations of concepts (combinations that might actually make no sense if you think too much about them). But you also greatly increase the chance of getting trash output, as the AI can no longer map the feature space back to an image that mirrors anything a human would interpret as having a similar feature space.


The paper that coined the term "stochastic parrots" would not agree with the claim that LLMs are "unable to produce a response that isn't in their training data". And the research has advanced a _long_ way since then.

[1]: Bender, Emily M., et al. "On the dangers of stochastic parrots: Can language models be too big?." Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. 2021.



/facepalm. Woosh indeed. Can I blame pronoun confusion? (Not to mention this misunderstanding kicked off a farcically unproductive ensuing discussion.)


it's just further evidence that we're also stochastic parrots :)


That is why we invented God.


Woosh.


Please clarify what you mean. On what basis do you say this?

Unless I’m misunderstanding, I disagree. If you reply, I’ll bet I can convince you.


Unless you have full access to the entirety of their training data, you can try to convince all you want, but you're just grasping at straws.

LLMs are stochastic parrots incapable of thought or reasoning. Even their chains of thought are part of the training data.


When combined with intellectual honesty and curiosity, the best LLMs can be powerful tools for checking argumentation. (I personally recommend Claude 3.5 Sonnet.) I pasted in the conversation history and here is what it said:

> Their position is falsifiable through simple examples: LLMs can perform arithmetic on numbers that weren't in training data, compose responses about current events post-training, and generate novel combinations of ideas.

Spot on. It would take a lot of editing for me to speak as concisely and accurately!


Your use of the word stochastic here negates what you are saying.

Stochastic generative models can generate new and correct data if the distribution is right. It's in the definition.


> you can try to convince all you want, but you're just grasping at straws.

After coming back to this to see how the conversation has evolved (it hasn't), I offer this guess: the problem isn't at the object level (i.e. what ML research has to say on this) nor my willingness to engage. A key factor seems to be a lack of interest on the other end of the conversation.


Most importantly, I'm happy to learn and/or be shown to be mistaken.

Based on my study (not at the Ph.D. level but still quite intensive), I am confident the comment above is both wrong and poorly framed. Why? Phrases like "incapable of thought" and "stochastic parrots" are red flags to me. In my experience, people who study LLM systems are wary of using such brash phrases. They tend to move the conversation away from understanding towards combativeness and/or confusion.

Being this direct might sound brusque and/or unpersuasive. My top concern at this point, not knowing you, is that you might not prioritize learning and careful discussion. If you want to continue discussing, here is what I suggest:

First, are you familiar with the double-crux technique? If not, the CFAR page is a good start.

Second, please share three papers (or high-quality writing from experts): one that supports your claim, one that opposes it, and one that attempts to synthesize.

Third, perhaps we can find a better forum.


I'll try again... Can you (or anyone) define "thought" in way that is helpful?

Some other intelligent social animals have slightly different brains, and it seems very likely they "think" as well. Do we want to define "thinking" in some relative manner?

Say you pick a definition requiring an isomorphism to thoughts as generated by a human brain. Then, by definition, you can't have thoughts unless you prove the isomorphism. How are you going to do that? Inspection? In theory, some suitable emulation of a brain is needed. You might get close with whole-brain emulation. But how do you know when your emulation is good enough? What level of detail is sufficient?

What kinds of definitions of "thought" remain?

Perhaps something related to consciousness? Where is this kind of definition going to get us? Talking about consciousness is hard.

Anil Seth (and others) talks about consciousness better than most, for what it is worth -- he does it by getting more detailed and specific. See also: integrated information theory.

By writing at some length, I hope to show that relying on loose sketches of concepts such as "thoughts" or "thinking" doesn't advance a substantive conversation. More depth is needed.

Meta: To advance the conversation, it takes time to elaborate and engage. It isn't easy. An easier way out is pressing the down triangle, but that is too often meager and fleeting protection for a brittle ego and/or a fixated level of understanding.


Can you?


Sometimes, I get this absolute stroke of brilliance for this idea of a thing I want to make and it's gonna make me super rich, and then I go on Google, and find out that there's already been a Kickstarter for it and it's been successful, and it's now a product I can just buy.

So apparently not.


I feel like everyone missed your joke :)


at least you did!


No, but then again you're not paying me $20 per month while I pretend I have absolute knowledge.

You can, however, get the same human experience by contracting a consulting company that will bill you $20,000 per month and lie to you about having absolute knowledge.


Unironically, thank you for sharing this strategy. I get throttled a lot, and I'm happy to pay to remove those frustrating limits.


Sounds like you two could split the cost of the family plan-- ahem the team plan.


and share private questions with each other


Training with Transparency


Pay as you go using the Anthropic API and an open-source UI frontend like LibreChat would be a lot cheaper, I suspect.


Depends on how much context he loads up into the chat. The web version is quite generous when compared to the API, from my estimations.


You.com (search engine and LLM aggregator) has a team plan for $25/month.

https://you.com/plans


I have ChatGPT ($20/month tier) and Claude and I absolutely see this use case. Claude is great but I love long threads where I can have it help me with a series of related problems over the course of a day. I'm rarely doing a one-shot. Hitting the limits is super frustrating.

So I understand the unlimited use case and honestly am considering shelling out for the o1 unlimited tier, if o1 is useful enough.

A theoretical app subscription for $200/month feels expensive. Having the equivalent of a smart employee working beside me all day for $200/month feels like a deal.


Yep, I have 2 accounts I use because I kept hitting limits. I was going to do the Teams to get the 5x window, but I got instantly banned when clicking the teams button on a new account, so I ended up sticking with 2 separate accounts. It's a bit of a pain, but I'm used to it. My other account has since been unbanned, but I haven't needed it lately as I finished most of my coding.


Have you tried NotebookLM for something like this?


Isn’t that Google’s garbage models only?


What's garbage about it?


1. Hallucinates more than any other model (Gemini Flash/Pro 1, 1.5, 1121).

2. Useless with large context. Ignores, forgets, etc.

3. Terrible code and code understanding.

Also this is me hoping it would be good and looking at it with rose tinted glasses because I could use cloud credits to run it and save money.


NotebookLM is designed for a distinct use case compared to using Gemini's models in a general chat-style interface. It's specifically geared towards research and operates primarily as a RAG system for documents you upload.

I’ve used it extensively to cross-reference and analyse academic papers, and the performance has been excellent so far. While this is just my personal experience (YMMV), it’s far more reliable and focused than Gemini when it comes to this specific use case. I've rarely experienced a hallucination with it. But perhaps that's the way I'm using it.


Can you detail how you use NotebookLM for academic papers?

I've looked into it, but as usual with LLM I feel like I'm not getting much out of it due to lack of imagination when it comes to prompting.


Have you tried LibreChat https://www.librechat.ai/ and just use it with your own API keys? You pay for what you use and can use and switch between all major model providers


Why not use the API? You can ask as many questions as you can pay for.


I haven't implemented this yet, but I'm planning on doing a fallback to other Claude models when hitting API limits; IIUC they rate limit per model.
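Roughly what I have in mind, as a sketch only (assumes the Anthropic Python SDK; the model names and fallback order are placeholders):

    import anthropic

    client = anthropic.Anthropic()
    # Preferred model first; fall back if its per-model rate limit is hit.
    FALLBACK_MODELS = ["claude-3-5-sonnet-latest", "claude-3-5-haiku-latest"]

    def ask_with_fallback(messages, max_tokens=1024):
        for model in FALLBACK_MODELS:
            try:
                return client.messages.create(
                    model=model,
                    max_tokens=max_tokens,
                    messages=messages,
                )
            except anthropic.RateLimitError:
                continue  # this model is throttled; try the next one
        raise RuntimeError("all fallback models are rate limited right now")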


Do you not have any friends to share that with? Or share a family cell phone plan or Netflix with?


They're probably an adult, so I would guess not.


Out of curiosity, why don't you use NotebookLM for the same functionality?


Are the limits applied to the org or to each individual user?


Individual users


And how often is it wrong?


Try typingmind.com with the API


A great middle ground



