Hacker News | ers35's comments


Maybe Cosmopolitan could be used with luastatic[1] to compile a Lua program to an "Actually Portable Executable":

  CC="" luastatic main.lua  # with CC empty, luastatic only generates main.luastatic.c
  # Then compile main.luastatic.c with Cosmopolitan Libc
[1] https://github.com/ers35/luastatic




Archive.is seems to be down at the moment.


Works fine here


Getting: Error 1001 - DNS resolution error


https://community.cloudflare.com/t/archive-is-error-1001/182...

tl;dr: cloudflare & archive.is: choose one.


I've been focusing on reading, but it looks like there is interest in writing too. The HN API doesn't support writing, so the username and password would have to be stored. It's important that reading keeps working even if HN changes their backend in a way that breaks writing. I'll think about it.



There have been many HN related websites posted over the years, but a lot end up as dead links. A self-hosted version does not depend on a third party. Another reason is to minimize the round trip latency of contacting a central server. Consider users without a good connection to the server.


What is your use case for this server that makes optimizing for LAN vs WAN latency a valuable outcome? I’ve never really noticed that latency in email clients or when using the HN website and so I’m curious what is unique to your specific scenario that makes it a priority.

(This isn’t criticism, but I definitely don’t understand why it’s a criterion in your case yet.)


HN itself is fast, but the comparison I'm making is with readers that use the HN API. See how getting each comment requires an additional API request: https://hacker-news.firebaseio.com/v0/item/22481199.json?pri...

It's more about the consistency of operating on data local to the user. For example, this comment (https://news.ycombinator.com/item?id=22231055) notes that HN paginates threads at 250 comments for performance reasons. A local database does not have that issue.
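To make the request-per-comment cost concrete, here is a toy Python sketch. The dict below is a stand-in for the Firebase API (in reality, each lookup is an HTTP GET to https://hacker-news.firebaseio.com/v0/item/&lt;id&gt;.json); the item IDs and tree shape are made up for illustration.

```python
# In-memory stand-in for the HN item endpoint; each fetch() models one
# HTTP round trip, since the real API serves one item per request.
ITEMS = {
    1: {"id": 1, "kids": [2, 3]},
    2: {"id": 2, "kids": [4]},
    3: {"id": 3},
    4: {"id": 4},
}
requests_made = 0

def fetch(item_id):
    global requests_made
    requests_made += 1  # one round trip per item
    return ITEMS[item_id]

def fetch_thread(item_id):
    # Walking the comment tree requires fetching every node individually.
    item = fetch(item_id)
    for kid in item.get("kids", []):
        fetch_thread(kid)

fetch_thread(1)
print(requests_made)  # 4 items loaded -> 4 requests
```

A thread with hundreds of comments therefore costs hundreds of round trips, which is why latency to the API adds up.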


Ah, okay, so it's more about having to initiate thousands of requests for a single page than about the latency of any single request in those thousands — because even if they were all to a local server, that's still terribly inefficient, and with latency it's even worse. Thanks!


In case it's helpful: we're going to make a new HN API that will be much easier to use. The idea is that adding something like "json/" to any HN URL will return a JSON version of that page.

Curiously enough, we should also be able to eliminate the pagination of comments by then. Both changes are waiting on some software work that we expect to improve performance significantly. I don't know when this will all be done, but I hope it's this year.


That reminds me of how https://forum.dlang.org works: https://github.com/CyberShadow/DFeed It also uses SQLite.
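A minimal sketch of the local-cache idea in Python with SQLite (the schema here is hypothetical, not DFeed's actual one): once items are stored locally, an entire thread can be read with a single recursive query, with no pagination and no per-comment round trip.

```python
import sqlite3

# Hypothetical local cache of HN items: (id, parent, text).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, parent INTEGER, text TEXT)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?, ?)",
    [(1, None, "story"), (2, 1, "reply"), (3, 2, "nested reply")],
)

# One recursive query walks the whole comment tree locally.
thread = conn.execute(
    "WITH RECURSIVE t(id, parent, text) AS ("
    "  SELECT id, parent, text FROM items WHERE id = 1"
    "  UNION ALL"
    "  SELECT i.id, i.parent, i.text FROM items i JOIN t ON i.parent = t.id)"
    " SELECT id FROM t"
).fetchall()
print([r[0] for r in thread])  # [1, 2, 3]
```

The same pattern works for rendering a full thread at once, regardless of how the upstream site paginates.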


You might find this interesting: https://github.com/donnemartin/haxor-news


Great idea. This could also be useful for posting a mirror for other users if the site ends up going down.

