pistoriusp's comments

Hey! Snaplet founder here. Want to clarify that it was not acquired by Supabase; I shut down the startup and found roles for some of the team at Supabase.

The code remains:

- https://github.com/supabase-community/seed
- https://github.com/supabase-community/copycat
- https://github.com/supabase-community/snapshot

This looks like a great project, wishing them all the best on the journey.


Thanks!! Means a lot coming from you. Best of luck at Supabase.


Thanks, but I am not at Supabase! I ended up going back to building RedwoodJS and took over the project, and now have a consultancy.


Do you use a local/free model?


I am currently using a local model, qwen3:8b, running on a 2020 Mac mini (with a 2018 Intel chip) for classifying news headlines, and it's working decently well for my task. Each headline takes about 2-3 seconds, but it's pretty accurate. It uses about 5.3 GB of RAM.


Can you expand a bit on your software setup? I thought running local models was restricted to having expensive GPUs or the latest Apple Silicon with unified memory. I have an Intel 11th-gen home server which I would like to use to run a local model for tinkering if possible.


Those little 4B and 8B models will run on almost anything. They're really fun to try out but severely limited in comparison to the larger ones - classifying headlines into categories should work well, but I wouldn't trust them to refactor code!

If you have 8GB of RAM you can even try running them directly in Chrome via WebAssembly. Here's a demo running a model that's less than 1GB to load, entirely in your browser (and it worked for me in mobile Safari just now): https://huggingface.co/spaces/cfahlgren1/Qwen-2.5-WebLLM
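If you want to call it from your own page rather than use that demo's UI, the library behind it (@mlc-ai/web-llm) exposes an OpenAI-style chat API. A rough sketch - the model ID here is illustrative, so check the library's prebuilt model list for what's actually available:

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Downloads the model weights into the browser cache on first run (hundreds of MB).
const engine = await CreateMLCEngine("Qwen2.5-0.5B-Instruct-q4f16_1-MLC");

// Same request shape as the OpenAI chat completions API.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Say hello in five words." }],
});

console.log(reply.choices[0].message.content);
```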


It really is a very simple setup. I basically had an old Intel-based Mac mini from 2020 (the Intel chip inside it is from 2018). It's a 3 GHz 6-core Core i5. I had upgraded the RAM to 32 GB when I bought it, but Ollama only uses about 5.5 GB of it, so it can run on a 16 GB Mac too.

The Qwen model I am using is fairly small but does the job I need it to, classifying headlines pretty decently. All I ask it to do is decide whether a specific headline is political or not, and it only responds with True or False.

I access this model from an app (running locally) using the `http://localhost:11434/api/generate` REST API with `think` set to false.

Note that this Qwen model is a `thinking` model, so disabling thinking is important; otherwise it takes very long to respond.
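Roughly, the request looks like this (just a sketch; the prompt and headline here are illustrative rather than my exact ones):

```typescript
// Ask the local Ollama server to classify one headline.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen3:8b",
    prompt: 'Is this headline political? Answer with only True or False: "Senate passes new budget bill"',
    stream: false, // return one JSON object instead of a token stream
    think: false,  // skip the thinking phase so the answer comes back quickly
  }),
});

const { response } = await res.json();
console.log(response.trim()); // "True" or "False"
```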

Note that I tested this on my newer M4 Mac mini too, and there the performance is a LOT faster.

Also, on my new M4 Mac, I originally tried using Apple's built-in Foundation Models for this task, and while it was decent, many times it hit Apple's guardrails and refused to respond because it claimed the headline was too sensitive. So I switched to the Qwen model, which didn't have this problem.

Note that while this does the job I need it to, as another comment said, it won't be much help for things like coding.


It's really just a performance tradeoff, and where your acceptable performance level is.

Ollama, for example, will let you run any available model on just about any hardware. But using the CPU alone is _much_ slower than running it on any reasonable GPU, and obviously CPU performance varies massively too.

You can even run models that are bigger than available RAM, but performance will be terrible.

The ideal case is to have a fast GPU and run a model that fits entirely within the GPU's memory. In these cases you might measure the model's processing speed in tens of tokens per second.

As the idealness decreases, the processing speed decreases. On a CPU only, with a model that fits in RAM, you'd be maxing out in the low single-digit tokens per second, and on lower-performance hardware you start talking about seconds per token instead. If the model does not fit in RAM, then the measurement is minutes per token.

For most people, their minimum acceptable performance level is in the double digit tokens per second range, which is why people optimize for that with high-end GPUs with as much memory as possible, and choose models that fit inside the GPU's RAM. But in theory you can run large models on a potato, if you're prepared to wait until next week for an answer.


+1

> It's really just a performance tradeoff, and where your acceptable performance level is.

I am old enough to remember developers respecting the economics of running the software they create.

Ollama running locally, paired occasionally with Ollama Cloud when required, is a nice option if you use it enough. I have twice signed up and paid $20/month for Ollama Cloud; I love the service, but use it so rarely (because local models are so often sufficient) that I cancelled both times.

If Ollama ever implements a pay as you go API for Ollama Cloud, then I will be a long term customer. I like the business model of OpenRouter but I enjoy using Ollama Cloud more.

I am probably in the minority, but I wish subscription plans would go away and Claude Code, gemini-cli, codex, etc. would all be only available pay as you go, with ‘anti dumping’ laws applied to running unsustainable businesses.

I don’t mean to pick on OpenAI, but I think the way they fund their operations actually helps threaten the long term viability of our economy. Our government making the big all-in bet on AI dominance seems crazy to me.


Yes, for the little it's good for, I'm currently using LM Studio with various models.


I saw a meme that I think about fairly often: great apes have learnt sign language, and communicated with humans, since the 1960s. In all that time they've never asked humans questions. They've never tried to learn anything new! The theory is that they don't know that there are entities that know things they don't.

I like to think that AI are the great apes of the digital world.


It's worth noting that the idea that great apes have learnt sign language is largely a fabrication by a single person, and nobody has ever been able to replicate it. All the communication has to be interpreted through that individual, and others (including people who speak sign language) have concluded that the apes are just making random hand motions in exchange for food.

They don't have the dexterity to really sign properly


Citation needed.


https://en.wikipedia.org/wiki/Great_ape_language#Criticism_a... - Not word for word, but certainly casting doubt that apes were ever really communicating in the way that people may have thought.


That article does completely refute 20k's claim that it was all done by one person though.


The way linguists define communication via language? Sure. Let's not drag the rest of humanity into this presumption.


You only need a citation for the idea that apes aren't able to speak sign language?


They claimed fraud by a single person, with zero replication. Both claims are testable, so they should be able to support them.

At the very least, more than one researcher was involved and more than one ape was alleged to have learned ASL. There is a better discussion about what our threshold is for speech, along with our threshold for saying that research is fraud vs. mistaken, but we don’t fix sloppiness by engaging in more of it.


So why wasn't the research continued further if the results were good? My assumption is it was because of the Fear of the Planet of the Apes!


Searching for koko ape fraud seems to produce a lot.


> In his lecture, Sapolsky alleges that Patterson spontaneously corrects Koko’s signs: “She would ask, ‘Koko, what do you call this thing?’ and [Koko] would come up with a completely wrong sign, and Patterson would say, ‘Oh, stop kidding around!’ And then Patterson would show her the next one, and Koko would get it wrong, and Patterson would say, ‘Oh, you funny gorilla.’ ”

Weirder still was this lawsuit against Patterson:

> The lawsuit alleged that in response to signing from Koko, Patterson pressured Keller and Alperin (two of the female staff) to flash the ape. "Oh, yes, Koko, Nancy has nipples. Nancy can show you her nipples," Patterson reportedly said on one occasion. And on another: "Koko, you see my nipples all the time. You are probably bored with my nipples. You need to see new nipples. I will turn my back so Kendra can show you her nipples."[47] Shortly thereafter, a third woman filed suit, alleging that upon being first introduced to Koko, Patterson told her that Koko was communicating that she wanted to see the woman's nipples

There was a bonobo named Kanzi who learned hundreds of lexigrams. The main criticism here seems to be that while Kanzi truly did know the symbol for “Strawberry”, he “used the symbol for “strawberry” as the name for the object, as a request to go where the strawberries are, as a request to eat some strawberries”. So no object-verb sentences, hence no grammar, which means no true language according to linguists.

https://linguisticdiscovery.com/posts/kanzi/


> So no object-verb sentences and so no grammar which means no true language

Great distinction. The stuff about showing nipples sounds creepy.


I mean dogs can learn a simple sign language?


Can the dogs sign back? Even dogs that learn to press buttons are mostly just pressing them to get treats. They don't ask questions, and it's not really a conversation.


They can bark as part of a trick and signal "the thing we are searching for is in that direction", etc., but not very abstract communication.


> The theory is that they don't know that there are entities that know things they don't.

This seems like a rather awkward way of putting it. They may just lack conceptualization or abstraction, making the above statement meaningless.


The exact term for the capacity is 'theory of mind'. For example, chimpanzees have a limited capacity for it in that they can understand others' intentions, but they seemingly do not understand false beliefs (this is what GP mentioned).

https://doi.org/10.1016/j.tics.2008.02.010


Theory of mind is a distinct concept that isn't necessary to explain this behavior. Of course, it may follow naturally, but it strikes me as ham-fisted projection of our own cognition onto others. Ironically, a rather greedy theory of mind!


If apes started communicating among themselves with sign language they learned from humans, that would mean they would get more practice using it, and they could evolve it over aeons. Hey, isn't that what actually happened?


Does that mean intelligence is the soul? Then we will never achieve AGI.


I wrote this last week:

> We spent a decade rebuilding the browser by hijacking routing, manually syncing state, rebuilding forms and transitions in JavaScript to match native app expectations. Now the browser has caught up. It's time to stop the hacks and build on the web again, but properly.

We've been "holding the browser wrong" for the past 10 years.


None of what you wrote changed in the past 10 years. You still need to do all of that for app-like behaviour.

Unless you're arguing that we should stop cramming app-like behaviour into a system that doesn't support it. Then I'm with you.


Good point, I think I should add that to the article.


We should chat. We're busy building a store for source code.


RedwoodSDK is the successor to RedwoodJS. We've rebranded RedwoodJS as "Redwood GraphQL" and built this new thing from scratch. We are the same people with the same ambitions, but more focused, with a narrower niche. I believe that a framework requires a platform to be competitive today.

Because of AI, the difficulty in writing code is greatly reduced. And because of platforms, the difficulty of shipping to production is greatly reduced.

That combination can be really great for your velocity when trying to build a business.


I can’t tell if serious or not, but I threw up in my mouth a little regardless.


Dead serious.


I’m now similarly conflicted between wanting to say congratulations or condolences.


I guess it would be helpful for me to understand what you're basing it on; then I can guide you in the most appropriate direction!


No, you didn't rebrand RedwoodJS. You just silently abandoned it. The docs haven't even been revised for the proclaimed new name. Paying a vendor to be locked into a well-funded platform shouldn't be considered a bad choice after all.


redwoodjs.com has a very clear letter of intent on the homepage. Our community forum has a description. Our documentation has a header that describes this.

I'm not sure that you really appreciate that RedwoodJS is open source, and if you cared as much as you claim to, you would have opened a bunch of PRs to help us out. I'm still open to that, so please, why don't you help us?


That clarity was not available before the flashy new website and video for the SDK were revealed. Everything was kept secret from the community for half a year.


So this is an outsider's perspective, and I would love to give you a first-hand account. I've invited you to chat on a video call.


Thanks, that does make sense, I just hadn't heard about the new direction/rebranding.

Agreed on both of your assessments. Best of luck!


Thank you! We're getting a lot of love! Just need to solve distribution. ;P


If you've made it to 10 years, that's a really nice problem to have (or maybe a really boring problem to have). But if you're just starting out, I want to prevent you from making all those decisions when you could really be focusing on your startup instead.

So I'm trying to encourage you to consider picking a platform and just sticking with the tools of the platform rather than bundling it all together yourself.


You can stick with it without legally getting trapped by some shady BD rep filling their sales quota.


Or like how Atlassian suddenly started charging a "serverless event" fee for their various workflow automation tasks a couple of years ago, basically slapping a $5-$10k markup on a product you already own. That's just for small companies; I'm sure there are enterprise customers paying six figures for a previously free feature who shrug it off. Like you said, a good problem to have.


Thanks! I think you summarized exactly what I was saying, and pointed out that I had a clickbait title. My goal here is: if you're starting out with something new, consider reaching for a platform rather than a bunch of services.

The reason I really love Cloudflare is their bindings. A lot of the time you are simply using fetch, with Request and Response, to interact with their services. It feels as if fetch has become the Unix pipe of the web.
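As a sketch of what I mean (the "BACKEND" service binding name is made up; you'd configure it in wrangler.toml), a Worker talking to another service through a binding is literally just fetch with Request and Response:

```typescript
interface Env {
  // A service binding to another Worker; it looks and behaves like fetch.
  BACKEND: Fetcher;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Pipe the incoming Request straight through to the bound service
    // and return its Response unchanged.
    return env.BACKEND.fetch(request);
  },
};
```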


> But as a business owner, this is a stupid reason to choose a solution. Choose solutions that will support the business and give it flexibility to change over time.

I agree with you. If you're starting out and your business is not profitable, don't pick SaaS. Don't take the time and pay those 5 taxes. Rather, just use the platform, and once you're scalable and profitable and growing, pick some other technology that supports you in the long run.

