
I'm from a very poor Appalachian town. My only option to better my life was to get up and leave.

People from my hometown do talk about the good old days. People worked at union factories and my grandfather worked a well-paying railroad job. My no-name town of 1000 people had a train station that made it possible to go to NYC. My grandpa got paid a handsome retirement from the railroad company. When he died, my grandmother was able to receive his benefits.

My hometown votes against building railways. The station has long crumbled. They vote against unions. The factories are long gone. They've voted against any sort of retirement benefits. The elderly are struggling and depending on churches handing out food.

Even if those factories come back, the jobs will pay less than my ancestors' did. They'll never have an affordable link to cities hours away. They'll never get the retirement benefits my ancestors had. And if you mention giving them these benefits, they yell and say they don't want them. The youth in my hometown who worked hard in school (we somehow had a decent school, all things considered) used their education as a ticket out. Now the people there are pissed and they're coming for education next.

These people don't want "the plant." They want to be young again, without understanding that their youth was great because my ancestors busted their asses to give us great opportunities. They squandered everything that was given to us.


It keeps bouncing between too real and too absurd.

Absurd: Not enough power on our sailboat to run Ableton and Photoshop.

Real: So we replaced it with open source technology.

Absurd: That technology was based on Electron.

Real: Electron was too bloated.

Absurd: So we ported everything over to the NES.

Real: And now you can run our software anywhere you can emulate an NES.


This goes to the core problem of the Tesla self-driving system though - it's designed to encourage stupid people to do stupid things. Yes, it's a funny idea, but what this guy has actually done is gone out and operated a motor vehicle recklessly and put everyone around him in danger. In my country it's highly likely this driver would be prosecuted for a number of offences if they could identify him (which I'm guessing wouldn't be that tricky).

The Devil is in the details. And in the case of the X11 Devil, they're literally named "detail", and they sound weird enough for Elon Musk to name his children after.

https://tronche.com/gui/x/xlib/events/window-entry-exit/

    typedef struct {
        int type;  /* EnterNotify or LeaveNotify */
        unsigned long serial; /* # of last request processed by server */
        Bool send_event; /* true if this came from a SendEvent request */
        Display *display; /* Display the event was read from */
        Window window;  /* ``event'' window reported relative to */
        Window root;  /* root window that the event occurred on */
        Window subwindow; /* child window */
        Time time;  /* milliseconds */
        int x, y;  /* pointer x, y coordinates in event window */
        int x_root, y_root; /* coordinates relative to root */
        int mode;  /* NotifyNormal, NotifyGrab, NotifyUngrab */
        int detail;
            /*
            * NotifyAncestor, NotifyVirtual, NotifyInferior, 
            * NotifyNonlinear,NotifyNonlinearVirtual
            */
        Bool same_screen; /* same screen flag */
        Bool focus;  /* boolean focus */
        unsigned int state; /* key or button mask */
    } XCrossingEvent;
    typedef XCrossingEvent XEnterWindowEvent;
    typedef XCrossingEvent XLeaveWindowEvent;
More details (these are just the "normal" ones, just wait till you read about "abnormal" NotifyGrab and NotifyUngrab mode and input focus events, and how grabbing interacts with input focus, and key map state notifications):

https://tronche.com/gui/x/xlib/events/window-entry-exit/norm...

https://tronche.com/gui/x/xlib/events/window-entry-exit/grab...

https://tronche.com/gui/x/xlib/events/input-focus/

https://tronche.com/gui/x/xlib/events/input-focus/normal-and...

https://tronche.com/gui/x/xlib/events/input-focus/grab.html

https://tronche.com/gui/x/xlib/events/key-map.html

And then you have colormaps and visuals:

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

>The color situation is a total flying circus. The X approach to device independence is to treat everything like a MicroVAX framebuffer on acid. A truly portable X application is required to act like the persistent customer in Monty Python’s “Cheese Shop” sketch, or a grail seeker in “Monty Python and the Holy Grail.” Even the simplest applications must answer many difficult questions:

    WHAT IS YOUR DISPLAY?
        display = XOpenDisplay("unix:0");

    WHAT IS YOUR ROOT?
        root = RootWindow(display, DefaultScreen(display));

    AND WHAT IS YOUR WINDOW?
        win = XCreateSimpleWindow(display, root, 0, 0, 256, 256, 1,
                                  BlackPixel(
                                      display,
                                      DefaultScreen(display)),
                                  WhitePixel(
                                      display,
                                      DefaultScreen(display)));

    OH ALL RIGHT, YOU CAN GO ON.

    (the next client tries to connect to the server)

    WHAT IS YOUR DISPLAY?
        display = XOpenDisplay("unix:0");

    WHAT IS YOUR COLORMAP?
        cmap = DefaultColormap(display, DefaultScreen(display));

    AND WHAT IS YOUR FAVORITE COLOR?
        favorite_color = 0; /* Black. */
        /* Whoops! No, I mean: */
        favorite_color = BlackPixel(display, DefaultScreen(display));
        /* AAAYYYYEEEEE!! */
        (client dumps core & falls into the chasm)

    WHAT IS YOUR DISPLAY?
        display = XOpenDisplay("unix:0");

    WHAT IS YOUR VISUAL?
        struct XVisualInfo vinfo;
        if (XMatchVisualInfo(display, DefaultScreen(display),
                             8, PseudoColor, &vinfo) != 0)
            visual = vinfo.visual;

    AND WHAT IS THE NET SPEED VELOCITY OF AN XConfigureWindow REQUEST?
        /* Is that a SubstructureRedirectMask or a ResizeRedirectMask? */

    WHAT??! HOW AM I SUPPOSED TO KNOW THAT? AAAAUUUGGGHHH!!!!
        (server dumps core & falls into the chasm)

This is quite darkly hilarious tbh. And consistent with my experience of Blackpool.

I once went there on a couple-week-long driving course when I was a teenager, hoping but eventually failing to get my license. I was put up in someone's house that was run as a sort of unlicensed B&B. The entire place – the curtains, the linen, the pillows – all smelled of old nicotine and damp. On my first evening I went out by myself to a fish-and-chip shop, feeling like a silly city boy in a greasy, gritty concrete town. Stuck out like a sore thumb. I remember walking, late evening, lonely, on a bridge over the railtrack. The entire place felt deserted, metallic and concrete.

I went back to my room and ate the oily chips on my bed. After the first day with an old, crusty – but perfectly lovely – driving instructor, and a rather large heavy-haulage driver who was renewing a license, we went out for some drinks. Beers. Lots of beer. And they took me to a gay club – of which there are oddly many in Blackpool – because they thought it'd be a laugh. It was actually a massive spectacle for me. It was the first time I saw older men kissing, right there, by the entrance on an old tawdry sofa. I was still on a journey of coming out, so it felt oddly enlightening or validating or something. It was an old-England gay club – the type you hear about from the era of Stonewall.

The entire town was like a time capsule to a poorer, apocalyptic Britain. Betting shops, cheap nail salons, boarded up derelict buildings everywhere! Even the beach was deserted. It reflected the same depressing, crumbling economy of coastal towns all over the UK. It felt like a shadow of its former self, but somehow, there was an old English magic to it that I can still feel. My nostalgia is probably getting the better of me, but I remember it fondly.

So yeh I think I understand this SLS thing. In places like Blackpool – forgotten remnants - you can feel the depression in the paving stones – the grey withering vitality – swallowing you whole.


>> But it doesn’t mean the principles of machine learning (which are loosely derived from how the brain actually works) apply only to text data or narrow categories of data like you mentioned.

When you say "the principles of machine learning", I'd like to understand what you mean.

If I were talking about "principles" of machine learning, I'd probably mean Leslie Valiant's Probably Approximately Correct Learning (PAC-Learning) setting [1] which is probably the most popular (because the most simple) theoretical framework of machine learning [2].

Now, PAC-Learning theory is probably not what you mean when you say "principles of machine learning", nor is it any of the other theories of machine learning we have, that formalise the learnability of classes of concepts. That's clear because none of those theories are "derived from how the brain actually works", loosely or not.

Mind you, there isn't any "principle", of machine learning, anyway, that I know of that is really "derived" from how the brain actually works; because we don't know how the brain actually works.

So, based on all this, I believe what you mean by "principles of machine learning" is some intuition you have about how _neural networks_ work. Those were originally defined according to the then-current understanding of how _neurons_ in the brain "work". That was back in 1943, by McCulloch and Pitts [3], in what is known as the McCulloch-Pitts neuron, the precursor of Rosenblatt's Perceptron. That model is not used any more and hasn't been for many years.

Still, if you are talking about neural networks, your intuition doesn't sound right to me. With neural nets, as with any other statistical learning approach, when we train on examples x of a class y, we learn the class y. If we want to learn classes y', y'', ... etc., we must train on examples x', x'', ... and so on. You have to train neural nets on examples of what you want them to learn; otherwise, they won't learn what you want them to learn.

The same goes with all of machine learning, following from PAC-Learning: a learner is given labelled instances of a concept, drawn from a distribution over a class of concepts, as training examples. The learner can be said to learn the class, if it can correctly label unseen instances of the class with some probability of some degree of error, with respect to the true labelling.
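
That framing can be made concrete with a toy example (mine, not the parent commenter's): a minimal perceptron in plain Python, trained on labelled instances of a single made-up concept. It learns exactly the labelling it was shown and nothing else; ask it about a different concept and its answer is meaningless.

```python
# Toy perceptron: it learns only the concept its training labels encode.
def train(examples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # update only when the labelling is violated
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Concept y: "first coordinate dominates". The learner sees only these labels.
X = [[0.9, 0.1], [0.8, 0.3], [0.1, 0.9], [0.2, 0.7]]
y = [1, 1, 0, 0]
w, b = train(X, y)
assert all(predict(w, b, x) == t for x, t in zip(X, y))
```

Nothing in the trained weights says anything about any concept other than the one the labels y encoded; that is the PAC-style point in miniature.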

None of this says that you can train a neural net on images and have it learn to generate text, or vice versa, train it on text and have it recognise images. That is certainly not the way that any technology we have now works.

Does the human brain work like that? Who knows? Nobody really knows how the brain works, let alone how it learns.

So I don't think you're talking about any technology that we have right now, nor are you accurately extrapolating current technology to the future.

If you are really curious about how all this stuff works, you should start by doing some serious reading: not blog posts and twitter, but scholarly articles. Start from the ones I linked, below. They are "ancient wisdom", but even researchers, today, are lost without them. The fact that most people don't have this knowledge (because, where would they find it?) is probably why there is so much misunderstanding on the internet of what is going on with LLMs and what they can develop to in the long term.

Of course, if you don't really care and you just want to have a bit of fun on the web, well, then, carry on. Everyone's doing that, at the moment.

____________

[1] https://web.mit.edu/6.435/www/Valiant84.pdf

[2] There's also Vladimir Vapnik's statistical learning theory, Rademacher complexity, and older frameworks like Learning in the Limit etc.

[3] https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCullo...


Ok, so this article is at least contributing to the good side of the discourse about undefined behavior, because it isn't actually delusional about reality. (I mean this in the sense of people being deluded/having the wrong notions about actual facts, by the way, rather than actual craziness. The people who are deluded about undefined behavior tend to say things like "C is definitely a portable assembler" or "compilers are trying to trick me" which are definitely false. A non-deluded person can be upset about the current state but understanding how C currently works is important if you want to discuss it.)

Anyways, to the actual content of the article, I agree with it but I think the frustration it accepts as reasonable is actually misguided. Here is my understanding of how things ended up for C (disclaimer: I was not born when most of this stuff happened.)

In the beginning, you had a C compiler for your computer, and it was basically just an assembler. This is what the "make C great again" people think the language really is under the hood, by the way. However, very quickly people realized that they wanted their C code to run elsewhere, and every computer does things differently, so they needed some sort of standard for approximately the lowest common denominator across most machines, and that became the C standard for what it is legal to do. The guarantee created at that point was that if you conform to the standard, every implementation of C has to run your program as the standard specifies. This was palatable to people because they had a bunch of machines with weird byte orderings or whatever, and it was obvious what would happen if the dumb compilers of the time translated their platform-specific code to a new architecture.

Later, the weird architectures started becoming rare. At the same time, though, a new architecture started growing: a virtual architecture, one where the compiler would actually "port" your code to the exact same processor you were compiling for before, but the code would run faster. It would do this by taking latitude through intermediate transformations, which it assumed it could do because your program should have been portable to the abstract machine.

Now, this completely weirded people out, because "I'm compiling to a new virtual architecture called 'x86-64 -O3' that is the same as 'x86-64 -O0' but faster and more restrictive" sounds really stupid. It's the same architecture, and they're not even real processors! But if you really think about it, compilers are really just taking advantage of the fact that your code is portable, because it works in the space of the C abstract machine, to do a "port" called "run a bunch of optimization passes". People understand when a port ends up causing a trap on another processor, because of course it does that on the new platform. But it's hard to get people to understand that their unaligned accesses on the "-O0" machine are no longer valid on the "-O3" machine, because, again, the instructions that come out look awfully similar and straightforward most of the time, except for the weird times when a change "surprises" you because the transition between the two crossed through an invalid space. Kind of like a path that seems to have a "weird jump" because it normally crosses through 3D space and at some point someone found a shortcut through the fourth dimension.

Anyways, the performance virtual architecture is all well and good, but what I think will be interesting moving forward is the security virtual architecture, where overflows and out-of-bounds accesses and type confusions are focused on more. Right now, as a side effect of performance optimization, they end up causing headaches for people, but Valgrind/sanitizers are an interesting look into what compiling to an "x86-64 for security testing" architecture looks like. The logical next step is even more exciting, because we're actually starting to deploy real architectures with security-focused features that will require ports that are every bit as concrete as any other physical architecture difference, which I think will "legitimize" this mindset to the people who I called deluded at the start of this now very rambly comment. Page protections mean that "const" is not something you can ignore. Pointer signing can mean that your "but they're the same bits underneath!" type confusions are no longer valid. ARM's Morello now means you can't play fast-and-loose with your pointers anymore; they're 128+ bits and you can't just decide you want to forge one out of an integer anymore without caring. Ports to these architectures absolutely rely on the existence of a C abstract machine, which has served pretty well considering that its existence is really just what a piece of paper says is legal or not, rather than something really planned beforehand.


Welcome! We find you equally anomalous. Any tips on avoiding HackerNews for 10 years?

If I never hear the word "opinionated" again in the context of software engineering, ever in my life, that'd be nice.

"LLMs have solved the language processing problem."

No, they haven't. They have made great strides in certain directions, but are woefully deficient in others.

For one thing, for as impressive as it looks, it's really quite difficult to use. The "knowledge" it has about text isn't an obviously usable understanding of grammar or some other symbolic representation of the information that could be used by other technologies; it's more a self-referential knowledge embedded in opaque neural net values that can almost only be used to extend the text. That is not 100% true, but using it for anything else is really hard. I would expect a technology that has "solved" the language processing problem to be useful for a much wider variety of tasks - to be, for instance, something I could hook up to the Unix shell to provide a safe and reliable human language interface to it.

(Note that prompting this model for some shell commands is light years from what I mean; along with the dangerous unreliability of the resulting commands, it also isn't actually hooked up to your shell and operating in a space where it knows your directories, files, and their contents. I want "give me a list of files relating to my business deals with Comcast" to literally work on the shell, to the point the result could then be piped to some other plain-text request, not what ChatGPT can do.)

"Its responses are fluent and rapidly becoming indistinguishable from human output."

Again, in some directions.

"It’s often bullshitting, in the sense that its convincing prose often turns out to not reflect reality."

While I acknowledge the second clause makes your sentence accurate on your terms, I would still say it's better to think of an LLM as always bullshitting, always confabulating. It's just built in a way that the maximum-probability confabulation on a factual topic may happen to be the fact. I mean, if you think about it, that shouldn't seem that surprising a statement. I've done it myself in real life, just accurately guessed a fact I didn't really know. Less often than I've been wrong when making such guesses, of course, but it's still something not outside our experience range.

But it's not like the LLM in any sense "knows" when it is telling the truth and "knows" when it is bullshitting. You can get spun into a real philosophical tizzy about what that "knows" means, but it doesn't matter, because for any sensible definition this particular model doesn't "know". It isn't built to "know". It just spits out the maximum probability continuation. This is not a claim about all AI architectures, it's just about LLMs. They don't know. Even if you convince one to output that it doesn't know, it doesn't know that either, it just thinks that's the most likely logical continuation. While this may be surprising to anyone who has been on the Internet for a while, there are in fact samples of people saying "I don't really know" on the Internet that would have worked into the training data.
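
The "maximum probability continuation" point can be illustrated with a deliberately crude sketch (my illustration; a real LLM is vastly more sophisticated, but the asymmetry is the same): a bigram model that greedily emits whichever word most often followed the current one in its training text. It has no representation of truth at all, only of what tends to come next.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word most often follows which,
# then greedily emit the argmax continuation. It has no notion of truth.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def continuation(word, steps=4):
    out = [word]
    for _ in range(steps):
        nxt = follows[out[-1]]
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])  # maximum-probability next token
    return " ".join(out)

print(continuation("the"))  # → "the cat sat on the"
```

Whether the emitted sentence is true of any actual cat never enters into it; the model only "knows" that "cat" usually follows "the". That's the sense in which a continuation machine doesn't "know" when it is right.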

I will say this; I think GPT does serve as the final refutation of the Turing Test as a measure of AI. It turns out to be entirely possible to build an "AI" that is basically optimized to pass it, yet, strangely incapable of almost anything else we would consider an "AI" to be.

I'm actually much more impressed by GPT than I sound on HN. It is a legitimately interesting technology and very impressive. The problem is, the Gartner hype cycle has massively overshot the degree to which it is interesting and impressive, so I sound very down on it as I try to bring people back to Earth. I keep going back to the video game metaphor... video games look far more interesting and complex than they often actually are, because the graphics can look so amazing and awesome and we've made such exponential advances in that field over the past few years that we can forget that the state of the art in, say, NPC dialog, remains the "choose your own adventure" dialog tree that was a familiar staple in games for over 30 years now, with the only elaboration since then being full speech voice acting (by humans). ChatGPT is the video game graphics of the AI world. Still impressive on its own terms! It's amazing what we pump to our screens at 120Hz/4K with the right hardware... yet, what is behind those graphics is nowhere near as impressive. Similarly for ChatGPT. Very impressive! But it's a munchkin that dumped all its stat points into "sounding good to humans" but doesn't have hardly any stat points anywhere else.


HN: one of the last remaining Great Good Places of the Internet, a lone tavern in an iconic gateway town to the now not-so-wild west.

Beyond the western borders of this little town, the tech gold rush has both expanded to epic proportions, affecting all the economies in the world, and also gone through enough booms and busts that the phrase "gold rush" seems somehow off.

As more and more young'uns join and jaded veterans return to throng the tavern alike, it often seems to be on the brink of either exploding with the largest gun fight in history, or jumping the shark.

And yet, against all odds, it retains its original magnetism - drawing throngs that grow in number and diversity while seers like https://news.ycombinator.com/user?id=patio11 and https://news.ycombinator.com/threads?id=tptacek continue to return - dispensing worldly wisdom worth its weight in gold from corner tables.

The secret is the man at the corner of the bar @dang, always around with a friendly smile and a towel on his shoulder. The only sheriff in the west who still doubles as the friendly bartender: always polite, always willing to break up a fight with kind words and clean up messes himself.

Yes, a cold, hard look from him is all it takes to get most outlaws to back down; yes, his Colt-45 "moderator" edition is feared by all men. But the real secret to his success is his earnest passion (some call it an obsession) for the seemingly Sisyphean task of sustaining good conflict - letting it simmer but keeping it at all times below the boiling point, based on "the code":

"Conflict is essential to human life, whether between different aspects of oneself, between oneself and the environment, between different individuals or between different groups. It follows that the aim of healthy living is not the direct elimination of conflict, which is possible only by forcible suppression of one or other of its antagonistic components, but the toleration of it—the capacity to bear the tensions of doubt and of unsatisfied need and the willingness to hold judgement in suspense until finer and finer solutions can be discovered which integrate more and more the claims of both sides. It is the psychologist's job to make possible the acceptance of such an idea so that the richness of the varieties of experience, whether within the unit of the single personality or in the wider unit of the group, can come to expression."

May the last great tavern in the West and its friendly bartender-sheriff live long and prosper.


> fatigued by algorithmic nonsense

Let's not participate in the corruption of the word "algorithm" here on HN, too. We wouldn't tolerate people saying "integral" when they mean "derivative", so we shouldn't tolerate this, either. We can afford to abstain from this sort of quick-and-easy but sloppy use of language as a shorthand for generic memetic outrage.


The idea is to suppress tedious communication so curious communication can flourish. It's impossible to have both.

I realize there's a critique of gardeners which argues that nobody should ever pull weeds, or even label any plant a weed—but I think most people come here for the flowers, and for that there needs to be a shit-ton of weed-pulling.


Was going to edit this in but it quickly turned into a related rant about the state of things: if you don't want that, scroll past.

The real problem is that there's no more room for the technological "middle class". We've gotten to the point where you cannot be SLIGHTLY interested in computers, want to make a quick app for yourself, or make something small to share with the world. You either have to have no interest at all (and live in blissful ignorance) or be so into this type of stuff that you might as well just make it a career. In between these two extremes lies a valley of frustration and needless complexity, where everything is always breaking and changing because we found out what was used 2 years ago is actually bad but THIS TIME IS DIFFERENT! because now we found a solution that is totally going to work forever this time. It's exhausting to even think about.

Many people got their start programming with something like messing with simple Visual Basic stuff. I, and I'm sure many others who are reading this, started with a TI-83 and a desire to cheat on my math quiz. I know that I'm going a little off the rails here, and this has become more of a rant than anything meaningful, but I really, really hate how there's quickly becoming no space for that sort of thing - nowhere for people formerly uninterested in technology and programming to get their feet wet, nowhere to make a quick application to accomplish a small task, etc.


Racket, what an odd part of my life.

After learning QBASIC and then quickly pivoting to Python thanks to the advice of my peers on 711chan IRC, I read ESR's "How to become a hacker" which lauded Lisp as this magical language that will forever change the way you think. Wow, powerful stuff for a teenager to hear. I want to have elevated, magical thinking. So I downloaded DrRacket.

Years later, after a few years that were the beginning of a career in software, I was in jail. I took some math classes where they gave us programmable calculators. I was fucking enthralled to be able to code again. I used TI-BASIC to make a little lisp interpreter, which I called Prison Lisp.

So many games, all the glee of a child in gradeschool (which is comparable to incarceration in many other ways). The people in my pod (shared living dormitory type thing) all asked to play the games, it was like having a gameboy.

Out again, mid twenties, start to code again. Somehow, a few years with no computers hadn't affected my ability to code in the least. I was actually much better than before. I start competing for gigs on Upwork, supporting myself pretty comfortably. I get married, start a family.

My wife loves coding challenges, but has no professional ambitions in tech. So I show her Racket, she loves it. My daughter is curious and wants a different lisp, so I show her Clojure. She hates it, sticks with Python. No judgement, I'm a lifelong Pythonista at this point.

I think back, to a friend from that 711chan IRC chat. He was learning Perl and I had recommended he try out something else, taught him a bit of Racket. I Googled him a year or so ago, turns out he ended up making several contributions to the Racket source code, and is a big shot electrical engineer. Good for him.

Life is so beautiful and strange.


I've seen things you people wouldn't believe. Millions burnt on consultants and licensing Oracle. I watched C series startups throwing it all away in a move to NoSQL. All those Amazon RDS fees will be lost in time.

I agree, it's like they looked at the typical C++ compilation experience and thought "I wish Matlab had those issues too!"

I bought my 1040ST, and a dot matrix printer, with most of my first year's student loan. (Almost all of it.)

I was so embarrassed I told no one, even my girlfriend at first. It was a big expensive secret. My car was a Chevette, with a bad tranny. Anyone in their right mind would have bought a better car.

Not one teacher could differentiate typewritten papers from dot matrix. Besides the fear of public speaking, I always worried a teacher would find out I didn't write my paper on a typewriter.

I tried to get my girlfriend to like it, but it was a big No. I remember drawing, in Paint, a penis. I even got it to vibrate. (It was not easy back then.)

She thought it was cute, and we had sexy time, but she didn't like the machine. I could just tell she didn't like my computer.

In retrospect--maybe she thought I should spend that kind of money on her? I remember thinking, I'm glad about equal rights. After all she was from a wealthy family, and I was poor. (sorry about reminiscing.)

Now most of you think it was my dick pic, but it wasn't. I think she thought the computer was nerdy?

She wasn't happy with my Bullmastiff either, but I loved my dog. She didn't like it when my dog had her period. She once said to me, "It's the dog, or me?". I didn't say anything, but my Elsa was my family. It was no decision to be made. I would have died for that dog.

I don't think we ever liked each other, but boy was she attractive.

She's now some big wig at a computer company, and consults on computing.

I only bought it for word processing.

I wasn't a computer guy. I just knew at the time college was a joke, and if I was going to write all those useless papers, I wasn't going to do it on an IBM, with Wite-Out.

I felt like I was cheating.

Got my degrees. Had a nervous breakdown. The world seemed grey, and depressing.

I got tired of looking at that horrid beige idle computer on my desk one day, and tossed it all. (I'm color blind, but thought that beige was horrid.)

I wish I kept it now though.

(I miss my dog, and computer. My girlfriend not so much, but she was stunning. Way out of my league.)


I was thinking about this this evening, and thought of another way to explain the "expected value of a thread" concept (or as I sometimes put it: the value of a post is the expected value of the subthread it leads to - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...), which is key to HN discussion.

The thing to understand is that HN threads are supposed to be conversations. A conversation isn't a one-way message like, say, a billboard or a PA announcement. It's a two-way or multi-way co-creation. In a community like HN, it's a multi-way co-creation with a very large fanout.

In conversation, to make high-quality comments you have to take other people into account. If you treat your comment only as a vehicle for your own opinions and feelings—if you leave out the relational dimension—then you're not in conversation. (I don't mean you personally, of course; I mean all of us.)

Conversation means being conscious, while speaking or writing, of whom you're talking to and how what you're saying may affect them. In a forum like HN it means being conscious of the range of people you may be affecting. In conversation, your utterances are not your disconnected private domain for you to optimize as you see fit. You're responsible for the effects you have on the conversation.

I know that some people will read this and think: you're censoring me! you're telling me I can't say what I think or feel! you just don't like my opinions! No no no—that's not it at all. In conversation, you do say what you think and feel, modulated by the relational sense. That is, you're guided not only by what you think and feel but also by the effect you are having, or are likely to have, on others. The goal is to have the best conversation we can have. If we get that right as a community, there's room for what everyone thinks and feels.

Look at it this way. When you're in a relationship with someone, do you bluntly blast them with whatever you're thinking and feeling on any sensitive topic between you? Of course you don't—not if you don't want to stay up all night fighting. What do you do instead? You find a way to say what you think and feel while taking into account what they think and feel. You do it genuinely, not faking it, and you find a way to show that you're doing it.

A lot of HN commenters are going to say: "don't tell me I'm in any fucking relationship with these assholes". Actually you are—that's exactly what you are, whether you want to be or not. You showed up at the same time they did. It may be a weakly cohesive relationship—not like protons and neutrons, more like bosons [1]—but relational dynamics still apply.

If that's too strong a metaphor, try this one: conversation is a dance. When you're dancing with someone, do you only take into account how you want to move and where you want to go? Of course not; that would end the dance. And you certainly don't move in a way that is likely to rub them the wrong way—why would you? It wouldn't serve your purpose, which is to have the best dance.

Other commenters will object: how am I supposed to know in advance how my comment is going to land with others? That's impossible! Well, you can't know exactly, and you don't have to. All you have to do is take it into account. If you take that into account and get it wrong, you'll naturally adapt.

There's one other layer to this. We have to take into account not just the others who are present and how our comments may land with them, but also the medium that we're all using. On HN, the medium is the large, public, optionally anonymous internet forum, and this comes with strengths and weaknesses that shape conversation. In communication, what gets communicated is not the original message you think you're sending, but rather the information that actually gets received by other people, and this has less to do with content than we think it does. It has just as much to do with the medium. Don't underestimate this! McLuhan got it right [2]. Internet forum comments are a mile wide, in the sense that you can say whatever you want, no matter how intense or outrageous—and an inch deep, in the sense that they come with almost no context or background that would help others understand where you're coming from.

We don't seem to have figured much out yet about how this medium works or how best to use it, but I think one thing is clear: because internet comments are so low-bandwidth and so stateless, each comment needs to include some signal that communicates its intent. There are plenty of ways to do this—simply choosing one word instead of another may suffice—but the burden is on the commenter to disambiguate [3]. Otherwise, given the lack of context and large fanout that define this medium, if a message can be misunderstood, it will be—and that's a recipe for bad conversation, which is in none of our interests.

Can we really develop this capacity collectively? Hard to say, but I don't think millions of people have to get it. We just need a large enough subset to deeply take this in—enough to affect the culture. Then the culture will replicate.

[1] I don't actually know anything about bosons

[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[3] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...


The community reflects the larger society, which is divided on social issues. Don't forget that users come from many countries and regions. That's a hidden source of conflict, because people frequently misinterpret a conventional comment coming from a different region for an extreme comment coming from nearby.

The biggest factor, though, is that HN is a non-siloed site (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...), meaning that everyone is in everyone's presence. This is uncommon in internet communities and it leads to a lot of misunderstanding.

(Edit: I mean internet communities of HN's size and scope, or larger. The problems are different at smaller size or narrower scope, but those aren't the problems we have.)

People on opposite sides of political/ideological/cultural/national divides tend to self-segregate on the internet, exchanging support with like-minded peers. When they get into conflicts with opponents, it's usually in a context where conflict is expected, e.g. a disagreeable tweet that one of their friends has already responded to. The HN community isn't like that—here we're all in the same boat, whether we like it or not. People frequently experience unwelcome shocks when they realize that other HN users—probably a lot of other users, if the topic is divisive—hold views hostile to their own. Suddenly a person whose views on (say) C++ you might enjoy reading and find knowledgeable, turns out to be a foe about something else—something more important.

This shock is in a way traumatic, if one can speak of trauma on the internet. Many readers bond with HN, come here every day and feel like it's 'their' community—their home, almost—and suddenly it turns out that their home has been invaded by hostile forces, spewing rhetoric that they're mostly insulated from in other places in their life. If they try to reply and defend the home front, they get nasty, forceful pushback that can be just as intelligent as the technical discussions, but now it feels like that intelligence is being used for evil. I know that sounds dramatic, but this really is how it feels, and it's a shock. We get emails from users who have been wounded by this and basically want to cry out: why is HN not what I thought it was?

Different internet communities grow from different initial conditions. Each one replicates in self-similar ways as it grows—Reddit factored into subreddits, Twitter and Facebook have their social graphs, and so on. HN's initial condition was to be a single community that is the same for everybody. That has its wonderful side and its horrible side. The horrible side is that there's no escaping each other: when it comes to divisive topics, we're a bunch of scorpions trapped in a single bottle.

This "non-siloed" nature of HN causes a deep misunderstanding. Because of the shock I mentioned—the shock of discovering that your neighbor is an enemy, someone whose views are hostile when you thought you were surrounded by peers—it can feel like HN is a worse community than the others. When I read what people write about HN on other sites, I frequently encounter narration of this experience. It isn't always framed that way, but if you understand the dynamic you will recognize it unmistakably, and this is one key to understanding what people say about HN. If you read the profile the New Yorker published about HN last year, you'll find the author's own shock experience of HN encoded into that article. It's something of a miracle of openness and intelligence that she was able to get past that—the shock experience is that bad.

But this is a misunderstanding—it misses a more important truth. The remarkable thing about HN, when it comes to social issues, is not that ugly and offensive comments appear here, though they certainly do. Rather, it's that we're all able to stay in one room without destroying it. Because no other site is even trying to do this, HN seems unusually conflictual, when in reality it's unusually coexistent. Every other place broke into fragments long ago and would never dream of putting everyone together [1].

It's easy to miss, but the important thing about HN is that it remains a single community—one which somehow has managed to withstand the forces that blow the rest of the internet apart. I think that is a genuine social achievement. The conflicts are inevitable—they govern the internet. Just look at how people talk about, and to, each other on Twitter: it's vicious and emotionally violent. I spend my days on HN, and when I look into arguments on Twitter I feel sucker-punched and have to remember to breathe. What's not inevitable is people staying in the same room and somehow still managing to relate to each other, however partially. That actually happens on HN—probably because the site is focused on having other interesting things to talk about.

Unfortunately this social achievement of the HN community, that we manage to coexist in one room and still function despite vehemently disagreeing, ends up feeling like the opposite. Internet users are so unused to being in one big space together that we don't even notice when we are, and so it feels like the orange site sucks.

I'd like to reflect a more accurate picture of this community back to itself. What's actually happening on HN is the opposite of how it feels: what's happening is a rare opportunity to work out how to coexist despite divisions. Other places on the internet don't offer that opportunity because the silos prevent it. On HN we have no silos, so the only options are to modulate the pressure or explode.

HN, fractious and frustrating as it is, turns out to be an experiment in the practice of peace. The word 'peace' may sound like John Lennon's 'Imagine', but in reality peace is uncomfortable. Peace is managing to coexist despite provocation. It is the ability to bear the unpleasant manifestations of others, including on the internet. Peace is not so far from war. Because a non-siloed community brings warring parties together, it gives us an opportunity to become different.

I know it sounds strange and is grandiose to say, but if the above is true, then HN is a step closer to real peace than elsewhere on the internet that I'm aware of—which is the very thing that can make it seem like the opposite. The task facing this community is to move further into coexistence. Becoming conscious of this dynamic is probably a key, which is why I say it's time to reflect a more accurate picture of the HN community back to itself.

[1] Is there another internet community of HN's size (millions of users, 10-20k posts a day), where divisive topics routinely appear, that has managed to stay one whole community instead of ripping itself apart? If so, I'd love to know about it.
