oh boy I wrote one of these many years ago for HN.
Within like an hour or two pg emailed me asking me to stop. I didn't know it at the time, but HN was being run on a rusty potato and scraping the homepage every 5 or 10 seconds was causing significant load.
Haha, my version makes a websocket connection to the official Firebase that the HN servers already send everything to, so it is zero extra load on HN
Yours is the MUCH better approach. When I did it, no api!
That sounds interesting, are there any public details on this? Is it https://github.com/HackerNews/API ?
You can stream the ID of the most recent item with something like this:
Then you will need to iterate through the new item IDs and fetch them, e.g. https://hacker-news.firebaseio.com/v0/item/45534174.json
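For reference, a minimal sketch of that flow in TypeScript (browser-side), assuming Firebase's REST streaming ("put" events) works on the public maxitem.json endpoint as documented; error handling and reconnection are omitted:

  // Stream the latest item ID, then fetch every item between the last ID we
  // saw and the new max. The endpoints are the public HN Firebase ones.
  const BASE = "https://hacker-news.firebaseio.com/v0";
  let lastSeen = 0;

  const source = new EventSource(`${BASE}/maxitem.json`);
  source.addEventListener("put", async (event) => {
    // Firebase "put" events carry JSON like {"path":"/","data":45534174}
    const maxId: number = JSON.parse((event as MessageEvent).data).data;
    if (lastSeen === 0) lastSeen = maxId - 1; // start from the current head
    for (let id = lastSeen + 1; id <= maxId; id++) {
      const item = await fetch(`${BASE}/item/${id}.json`).then((r) => r.json());
      if (item) console.log(item.type, item.by, item.title ?? "");
    }
    lastSeen = maxId;
  });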
Afaict, HN is still running on a rusty potato. The software's written well, so it doesn't need to run on more than that. (What's going to happen to it? Someone links to it from HN?)
less rusty as of ~sep 2024 :-) https://news.ycombinator.com/item?id=44099006
They've upgraded to a multi-eye potato‽ Truly the wonders of modern technology have no bound.
I mean, I love what it says about the insanity of some of the tech stacks that we run these days.
The potato with multiple eyes works to serve one of the more important and trafficked properties on the Internet.
> one of the more important and trafficked properties on the Internet.
I like HN, but it's really only important within a very niche subset of the Internet, and it also doesn't have much traffic. There's like a single post submitted every two minutes. That's not much.
Given how often a link that makes it to the front page struggles to serve the incoming traffic from HN, it's fair to say HN is quite efficient
I think [being unable to handle peak load] does not imply efficiency. Unless perhaps this is a joke that I’m misunderstanding?
Nonetheless, I suspect that HN probably is quite efficient, just based on what I know about dang. Even so, the parent claim was that it was popular and important, not that it was efficient.
I wonder what % of users submit posts and browse new submissions on here and on reddit.
I think it’s single digit percentages that have an account.
The 90-9-1 principle is generally taken as the truth, though I've not seen any rigorous studies to back that up.
https://www.reddit.com/r/TheoryOfReddit/comments/s2dvr4/the_...
https://en.wikipedia.org/wiki/1%25_rule_(Internet_culture)
At the very least I'm confident it's a power-law distribution. 90-9-1 seems like a reasonable first-order estimate (but I've also seen no data)
A LISPy potato, right?
Source code: https://github.com/jerbear2008/hn-live
And live firebase source/api (run by HN) used for this: https://github.com/HackerNews/API
(just making sure it's obvious - I didn't create this site! all credits to @jerbear4328 who is here on HN - I'll email them to let them know it's trending) :-)
Ha amazing that this is just an HTML file
Very cool! Thanks for sharing!
(edit: terminal version available?)
I'm not the author of this live feed site, but I did just find
https://github.com/ggerganov/hnterm
from the author of llama.cpp (!!).
It also has a beautiful web version thanks to Emscripten: https://hnterm.ggerganov.com/
It would be great to have the "on: x y z thread" field included.
This is literally the only thing that seems missing. Stupendous work!
https://jaytaylor.github.io/hn-live2
Enjoy!
I did this too: https://hn.hotgarba.ge/
Edit: Through my own tool I see this comment got insta-marked as [dead], rude.
I wonder if the domain is banned for some reason. Another comment of yours is dead, which includes a link to the same domain.
https://news.ycombinator.com/item?id=39669357
https://hotgarba.ge/
(Edit: Yep, this comment is dead on arrival.)
Love the domain lol
Cool!
Would be slightly more contextual if the title of the original post was displayed.
The HN/Firebase API doesn't make this easy. For https://hnstream.com I ended up crawling items to find the article.
Any tips on respectfully crawling HN so you don’t get throttled? I had an application idea that could not be served by the API (need karma values) so I started to write code to scrape but got rate limited pretty quickly.
I've had no trouble hitting the Firebase API at the speed items are created, with a 5 second delay between retries.
For scraping HN directly, in my experience you have to go extremely slow, like 1 minute between fetching items. And if you get blocked, it may be better to wait a long time (minutes) before trying again rather than exponential backoff, in order to get out of the penalty box. You'll need a cache for sure.
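To make that concrete, a rough sketch of the slow-and-cached approach in TypeScript (Node 18+); the one-minute spacing and the long flat wait after a block come from the comment above rather than any documented limit, and the User-Agent string is just a placeholder:

  const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
  const cache = new Map<string, string>(); // never fetch the same URL twice

  async function politeFetch(url: string): Promise<string> {
    while (true) {
      const res = await fetch(url, {
        headers: { "User-Agent": "my-hn-scraper (you@example.com)" }, // placeholder contact info
      });
      if (res.ok) return res.text();
      // Blocked or throttled: wait a long, flat interval to get out of the
      // penalty box instead of hammering with short exponential retries.
      await sleep(5 * 60 * 1000);
    }
  }

  async function crawl(urls: string[]): Promise<Map<string, string>> {
    for (const url of urls) {
      if (cache.has(url)) continue;
      cache.set(url, await politeFetch(url));
      await sleep(60 * 1000); // roughly one minute between page fetches
    }
    return cache;
  }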
The comments don't even have a thread ID?
Comment items look like https://hacker-news.firebaseio.com/v0/item/45533616.json?pri...:
{
  "by" : "jkarneges",
  "id" : 45533018,
  "kids" : [ 45533616 ],
  "parent" : 45532549,
  "text" : "The HN/Firebase API doesn't make this easy. For <a href=\"https://hnstream.com\" rel=\"nofollow\">https://hnstream.com</a> I ended up crawling items to find the article.",
  "time" : 1760043552,
  "type" : "comment"
}
"parent" can either be the actual parent comment or the parent article, depending on where in the comment chain you are. https://jaytaylor.github.io/hn-live2 is doing it though.
As does hnstream.com from the sourced sample comment itself. Both just traverse the parent id until it's the root (article). It takes more queries, but the API is not rate limited.
It wouldn't take more queries if the comments were cached. It could probably be done entirely in memory, HN's entire corpus can't be that large.
If one were to start at the page endpoints (eg /topstories) one could add references to origin ids while preloading comments, and probably cover the most likely to be referenced ID, and even make traversal up the tree even more efficient.
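A minimal sketch of that parent-walking approach with an in-memory cache (TypeScript); the endpoints are the standard HN Firebase ones, everything else is illustrative:

  const BASE = "https://hacker-news.firebaseio.com/v0";
  const items = new Map<number, any>(); // memoize every item we've fetched

  async function getItem(id: number): Promise<any> {
    if (!items.has(id)) {
      items.set(id, await fetch(`${BASE}/item/${id}.json`).then((r) => r.json()));
    }
    return items.get(id);
  }

  // Follow "parent" links until we reach an item with no parent: the root story.
  async function rootStory(commentId: number): Promise<any> {
    let item = await getItem(commentId);
    while (item.parent !== undefined) {
      item = await getItem(item.parent);
    }
    return item;
  }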
Perhaps @kogir, who was active on https://github.com/HackerNews/API could add the thread id.
I tried this, but it required making a request for every comment and would probably call for a backend, whereas this can run just off of the Firebase websocket stream from a static HTML file.
I found it a fun game to guess what I thought the post was about.
If you want a live version of most of the site (including tracking new comments in items you've already looked at), I made this to improve my React skills when the HN API first came out:
https://insin.github.io/react-hn
Wow it looks really cool, I think I might enjoy actually using it as the main way to view the site
I was going to say the same thing too!
I just wish there was a dark mode... Let me install night reader to see how it looks :p
Edit: would've loved if there was a way to sign up or make comments from there, oh well :< I'm not sure if that's even possible, but I hope it could be.
I checked the GitHub repo and found out that it's a single static HTML page, and ahh, I missed that it's hosted on github.io as well because of their static hosting, somehow I didn't see it!!
Static hosting is super cool, and I'm going to tinker with it to build something like this with monospace-web instead: https://owickstrom.github.io/the-monospace-web/
Thanks a lot for creating this! Appreciate it. I haven't read the source code, but I'm going to read it one day (if I don't procrastinate lol)!
Have a nice day!
I often wonder if dang and tomhow have something like this running.
I wonder to what extent they read everything which is posted here.
Not even close.
https://news.ycombinator.com/item?id=20652157
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
I moderated a few chat rooms and dreamed of all this complex tooling I could build to automate and help me fix issues right away. Of course, the tiny chat rooms didn't need any of that. The hard part is actually moderating fairly and trying to figure out where the boundaries are. I couldn't do it. Makes sense why pg said "don't start a forum".
A person would lose their mind pretty quickly this way.
Great potential job for an LLM to spot abuse. The existing mod tools/system seem to work quite well... I like HN as it is!
I know they get an alert whenever a comment is flagged. I assume they get an alert whenever certain "problematic" accounts post. Maybe they get an alert whenever the flamewar detector goes off? This forum is written in a Lisp so they can probably hotpatch whatever code they want and add whatever filters they want.
But dang has said numerous times that he doesn't read everything; that would probably not even be feasible.
The question is, YC being what it is, are they using LLMs to automate moderation or do sentiment analysis?
We're working on the latter but it's not yet in production. We're only going to deploy it once it's clear that it significantly improves on what mods + the existing software are already doing. The hard part, of course, is how to evaluate that.
Lol. Imagine them having a whole room of TVs running it...
Could be a cool movie scene where the stream of data is actually live HN comments lol.
Another great use of the HN API! It would be nice to filter it somehow to threads where I have commented; sometimes I don't notice until days later that someone replied to me.
To shill my own use of the API:
I did an animated "replay" view for historical threads like Rust 1.0 launch: https://hn.unlurker.com/replay?item=9551937
And a (static, refreshable) view of recent activity grouped by topic: https://hn.unlurker.com
You know about https://www.hnreplies.com/ ? Sends you a mail when someone replies to your comment.
Didn't know about this! Thanks. I just wish something like this existed for Signal or Discord, but maybe that's because I don't use email much and I really like Signal...
But anyways, just as a heads-up: for a moment I was confused that I wasn't getting the verification mail and was literally about to comment on it, but then I got the mail just in time. So to anyone out there: be a bit patient, it's worth it (or it could totally just be an issue on my side since I spammed it twice, or maybe it got congested, I'm not sure).
EDIT: It wasn't that they sent me two mails; rather, they sent me one mail, I clicked it, and then got another. I thought it was because I spammed verification twice, but it turns out they sent me a mail because someone had responded to me :)
Neato. I really like this.
Yeah, you need to give it a moment. In active discussions I usually see the reply before getting the notification. I guess the service needs a bit to detect responses, and then each email needs a bit of time as well.
This has been very reliable for me in the past, great project.
perfect, thanks
This is so cool. Waiting to see my own comment
Saw it!
test
Of course, the next thing is to watch for edits and deletes. What gets written and then gets thought better of. What does raw and unfiltered HN think?
Pretty cool! I must manually refresh to see new posts. Implementing real-time updates (e.g. via WebSocket or Server-Sent Events) would significantly improve usability.
It sounds cool, but for usability it's not great. Think about how Reddit recalculates the results order as you're paging through, and you see items from the previous page show up on the next. Now imagine that happening in real time. Maybe there's a link you want to read, but you get pulled away for 10 minutes. By the time you get back, the link is higher or lower, or may be completely missing.
As a side feature, that would be neat though.
That is a good point; however, there are design solutions around it. For example:
- Poll every X seconds instead of real-time
- Enable the user to toggle real-time mode
- Load new posts in with a "+" button at the top to fetch the latest posts (like Twitter)
That's a good idea. Maybe bind that to a key (what about Ctrl-R). And maybe you can put this feature into UAs, since it would be useful for more websites.
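If it helps, a tiny sketch of the "+" button idea from a couple of comments up (TypeScript); the function names are made up, and updateBadge/render stand in for whatever actually updates the page:

  const pending: unknown[] = []; // items that arrived but aren't shown yet

  function onNewItem(item: unknown, updateBadge: (count: number) => void) {
    pending.push(item);
    updateBadge(pending.length); // e.g. show a "+3 new posts" button at the top
  }

  function showPending(render: (items: unknown[]) => void) {
    render(pending.splice(0, pending.length)); // insert buffered items, clear the buffer
  }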
Writing this comment just to see how quickly it shows up
Edit: Felt like it took almost a minute, I wasn't timing precisely because I didn't expect it to take so long!
It (hopefully) took exactly 30 seconds, the page delays every item until 30 seconds after its posted date. It doesn't poll HN's server, it opens a websocket to the official HN Firebase, and without the delay, items appear in large chunks. I'm pretty sure the HN server syncs with Firebase every 30 seconds, so this is as fast as it can go while still being accurate.
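In case it's useful, a rough sketch of that delay (TypeScript); "time" is the Unix timestamp (in seconds) on HN items, and render is a placeholder for whatever draws the item:

  const DELAY_MS = 30_000;

  function scheduleItem(item: { time: number }, render: (item: { time: number }) => void) {
    const showAt = item.time * 1000 + DELAY_MS; // show 30s after the item's posted time
    setTimeout(() => render(item), Math.max(0, showAt - Date.now()));
  }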
Interestingly some comments appear as [delayed].
Fun! A CSS animation would make the updates less jarring and the feed easier to read.
Cool idea.
Separately I didn’t know posts/submissions could be instantly flagged dead and I didn’t know there are decade+ old bots still configured to spam this site https://news.ycombinator.com/submitted?id=VivaTechnics
(2024) Show HN from the dev https://news.ycombinator.com/item?id=39509910
It works, but I'd advise against using the same branding. Consider making it look significantly different from the website's comments page to avoid any issues, since it's not part of the webapp nor sanctioned.
Ignore this guy, lol.
Okay, maybe you could explain why copying the look of a product so it looks exactly like said product, without it being warranted, is worth ignoring and funny?
This is awesome!
Bets on how overwhelmingly negative the comment sentiments are? Anyone got an LLM handy to analyze that?
A very, very quick copy/paste of 73 comments there, fed to Claude with a basic "characterize+count comments" prompt, gives me 70% neutral/technical, 14% positive, 16% negative.
Claude also said "The negative comments tend to be more substantive critiques of systems or research approaches rather than personal attacks." - so... that's good? :-)
I've done it before on large threads. Neutral sentiment tends to dominate at roughly 50% but negative sentiment is much higher than positive sentiment. Negative sentiment is also higher if you just take the highest ranked comments.
It's actually quite stark looking at it this way and makes me question how healthy it is to use this site as often as I do. Fully aware that we're also piling on the negative sentiment.
This is pretty sweet